Feb 16 15:07:29 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 15:07:29 crc restorecon[4678]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 15:07:29 crc restorecon[4678]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc 
restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc 
restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc 
restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc 
restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:29 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30
crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc 
restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc 
restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc 
restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 15:07:30 crc restorecon[4678]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 
crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc 
restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc 
restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 15:07:30 crc restorecon[4678]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc 
restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 15:07:30 crc restorecon[4678]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 15:07:31 crc kubenswrapper[4835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 15:07:31 crc kubenswrapper[4835]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 15:07:31 crc kubenswrapper[4835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 15:07:31 crc kubenswrapper[4835]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 15:07:31 crc kubenswrapper[4835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 15:07:31 crc kubenswrapper[4835]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.116090 4835 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124126 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124160 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124172 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124183 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124194 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124205 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124216 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124227 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124238 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124249 4835 feature_gate.go:330] 
unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124259 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124270 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124280 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124293 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124302 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124312 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124322 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124332 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124343 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124356 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124369 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124380 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124390 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124401 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124413 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124423 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124433 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124444 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124456 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124466 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124477 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124487 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124498 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124510 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 
15:07:31.124520 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124563 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124573 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124583 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124593 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124603 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124613 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124620 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124628 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124636 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124644 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124652 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124660 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124667 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124675 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 15:07:31 
crc kubenswrapper[4835]: W0216 15:07:31.124687 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124697 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124705 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124713 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124723 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124734 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124742 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124751 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124759 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124767 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124775 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124785 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124793 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124800 4835 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124808 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124818 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124827 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124835 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124844 4835 feature_gate.go:330] unrecognized feature gate: Example Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124852 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124860 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.124868 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125035 4835 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125051 4835 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125067 4835 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125078 4835 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125089 4835 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125098 4835 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125109 4835 
flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125121 4835 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125131 4835 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125141 4835 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125153 4835 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125162 4835 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125171 4835 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125180 4835 flags.go:64] FLAG: --cgroup-root="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125189 4835 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125198 4835 flags.go:64] FLAG: --client-ca-file="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125207 4835 flags.go:64] FLAG: --cloud-config="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125216 4835 flags.go:64] FLAG: --cloud-provider="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125225 4835 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125235 4835 flags.go:64] FLAG: --cluster-domain="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125244 4835 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125253 4835 flags.go:64] FLAG: --config-dir="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125264 4835 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 15:07:31 crc 
kubenswrapper[4835]: I0216 15:07:31.125273 4835 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125284 4835 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125294 4835 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125303 4835 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125312 4835 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125320 4835 flags.go:64] FLAG: --contention-profiling="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125329 4835 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125338 4835 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125348 4835 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125358 4835 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125368 4835 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125378 4835 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125387 4835 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125396 4835 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125406 4835 flags.go:64] FLAG: --enable-server="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125415 4835 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125427 4835 flags.go:64] 
FLAG: --event-burst="100" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125436 4835 flags.go:64] FLAG: --event-qps="50" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125446 4835 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125455 4835 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125464 4835 flags.go:64] FLAG: --eviction-hard="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125474 4835 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125483 4835 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125492 4835 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125501 4835 flags.go:64] FLAG: --eviction-soft="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125510 4835 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125518 4835 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125561 4835 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125570 4835 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125580 4835 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125589 4835 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125599 4835 flags.go:64] FLAG: --feature-gates="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125610 4835 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125619 4835 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125629 4835 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125638 4835 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125647 4835 flags.go:64] FLAG: --healthz-port="10248" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125656 4835 flags.go:64] FLAG: --help="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125665 4835 flags.go:64] FLAG: --hostname-override="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125674 4835 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125683 4835 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125692 4835 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125701 4835 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125709 4835 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125718 4835 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125728 4835 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125737 4835 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125746 4835 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125755 4835 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125764 4835 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125773 4835 
flags.go:64] FLAG: --kube-reserved="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125782 4835 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125791 4835 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125800 4835 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125809 4835 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125818 4835 flags.go:64] FLAG: --lock-file="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125826 4835 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125835 4835 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125844 4835 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125857 4835 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125866 4835 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125875 4835 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125884 4835 flags.go:64] FLAG: --logging-format="text" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125893 4835 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125902 4835 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125911 4835 flags.go:64] FLAG: --manifest-url="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125921 4835 flags.go:64] FLAG: --manifest-url-header="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125935 4835 
flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125947 4835 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125960 4835 flags.go:64] FLAG: --max-pods="110" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125972 4835 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125983 4835 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.125995 4835 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126006 4835 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126018 4835 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126029 4835 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126041 4835 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126064 4835 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126075 4835 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126086 4835 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126097 4835 flags.go:64] FLAG: --pod-cidr="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126146 4835 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126165 4835 flags.go:64] FLAG: 
--pod-manifest-path="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126176 4835 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126186 4835 flags.go:64] FLAG: --pods-per-core="0" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126196 4835 flags.go:64] FLAG: --port="10250" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126204 4835 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126213 4835 flags.go:64] FLAG: --provider-id="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126222 4835 flags.go:64] FLAG: --qos-reserved="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126231 4835 flags.go:64] FLAG: --read-only-port="10255" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126241 4835 flags.go:64] FLAG: --register-node="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126250 4835 flags.go:64] FLAG: --register-schedulable="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126259 4835 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126274 4835 flags.go:64] FLAG: --registry-burst="10" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126283 4835 flags.go:64] FLAG: --registry-qps="5" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126292 4835 flags.go:64] FLAG: --reserved-cpus="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126301 4835 flags.go:64] FLAG: --reserved-memory="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126312 4835 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126321 4835 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126330 4835 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 
15:07:31.126338 4835 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126347 4835 flags.go:64] FLAG: --runonce="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126356 4835 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126365 4835 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126374 4835 flags.go:64] FLAG: --seccomp-default="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126383 4835 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126392 4835 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126401 4835 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126410 4835 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126420 4835 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126428 4835 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126437 4835 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126447 4835 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126456 4835 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126465 4835 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126474 4835 flags.go:64] FLAG: --system-cgroups="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126483 4835 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126498 4835 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126507 4835 flags.go:64] FLAG: --tls-cert-file="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126515 4835 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126554 4835 flags.go:64] FLAG: --tls-min-version="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126563 4835 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126573 4835 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126582 4835 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126591 4835 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126600 4835 flags.go:64] FLAG: --v="2" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126611 4835 flags.go:64] FLAG: --version="false" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126623 4835 flags.go:64] FLAG: --vmodule="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126633 4835 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.126642 4835 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126887 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126901 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126912 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 15:07:31 crc 
kubenswrapper[4835]: W0216 15:07:31.126923 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126933 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126943 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126954 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126966 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126977 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126986 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.126994 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127002 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127010 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127020 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127030 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127038 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127047 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127055 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127064 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127072 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127080 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127088 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127096 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127106 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127113 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127121 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127129 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127136 4835 feature_gate.go:330] unrecognized feature gate: Example Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127144 4835 
feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127151 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127159 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127168 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127175 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127183 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127191 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127198 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127207 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127214 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127222 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127230 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127238 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127246 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127253 4835 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127261 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127268 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127276 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127284 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127292 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127299 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127306 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127314 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127325 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127335 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127344 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127353 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127363 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127372 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127379 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127388 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127397 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127405 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127413 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127420 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127433 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127440 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127448 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127456 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127464 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127471 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 
15:07:31.127480 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.127488 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.127511 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.141423 4835 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.141473 4835 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141582 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141595 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141603 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141612 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141618 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141623 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141629 4835 feature_gate.go:330] unrecognized 
feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141636 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141643 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141649 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141655 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141660 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141666 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141673 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141679 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141685 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141690 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141696 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141701 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141706 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141712 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 
15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141717 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141723 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141728 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141733 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141739 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141746 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141755 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141762 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141770 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141776 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141782 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141787 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141793 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141800 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141806 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141811 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141818 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141823 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141829 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141835 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141842 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141848 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141854 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141859 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141866 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141873 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141880 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141886 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141892 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141898 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141903 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141909 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141914 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141920 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141925 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 15:07:31 crc 
kubenswrapper[4835]: W0216 15:07:31.141930 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141935 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141940 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141946 4835 feature_gate.go:330] unrecognized feature gate: Example Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141951 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141956 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141962 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141967 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141973 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141978 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141984 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141989 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141994 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.141999 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142005 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 
15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.142015 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142205 4835 feature_gate.go:330] unrecognized feature gate: Example Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142214 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142220 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142225 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142233 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
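The `feature_gate.go:386` entries record the final resolved gate map (logged once per parsing pass, hence the repetition). A sketch for pulling that map out of one such line into a dict, assuming the Go `map[key:value ...]` print format shown above:

```python
import re

def parse_feature_gates(line: str) -> dict:
    """Extract the Go-printed gate map, e.g. 'feature gates: {map[A:true B:false]}'."""
    m = re.search(r"feature gates: \{map\[([^\]]*)\]\}", line)
    if not m:
        return {}
    pairs = (item.split(":") for item in m.group(1).split())
    return {name: value == "true" for name, value in pairs}

# Trimmed sample in the same format as the feature_gate.go:386 lines above.
line = ('I0216 15:07:31.142015 4835 feature_gate.go:386] feature gates: '
        '{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}')
gates = parse_feature_gates(line)
# gates -> {"CloudDualStackNodeIPs": True, "KMSv1": True, "NodeSwap": False}
```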
Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142240 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142245 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142253 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142259 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142265 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142271 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142276 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142281 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142286 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142292 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142297 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142302 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142307 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142313 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142318 4835 feature_gate.go:330] 
unrecognized feature gate: MachineAPIMigration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142323 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142328 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142333 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142339 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142344 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142351 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142358 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142364 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142369 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142375 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142381 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142387 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142394 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142400 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142407 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142414 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142420 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142426 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142432 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142437 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142443 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142449 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142454 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142459 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142464 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142470 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142475 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 
15:07:31.142481 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142486 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142491 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142496 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142502 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142507 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142512 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142518 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142523 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142548 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142553 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142558 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142563 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142570 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142576 4835 feature_gate.go:330] unrecognized feature 
gate: EtcdBackendQuota Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142583 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142590 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142596 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142603 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142610 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142616 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142621 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142626 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.142634 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.142644 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.142844 4835 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.147310 4835 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.147422 4835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
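With client rotation on, the kubelet bootstraps from `/var/lib/kubelet/pki/kubelet-client-current.pem` and the certificate manager logs an expiration and a jittered rotation deadline. A sketch for computing the safety margin between the two from such a log line (timestamps in Go's `2006-01-02 15:04:05 +0000 UTC` print format; sub-second digits are dropped because they can exceed what `%f` accepts):

```python
import re
from datetime import datetime, timezone

# Trimmed sample in the same format as the certificate_manager.go:356 lines below.
LINE = ("I0216 15:07:31.150626 4835 certificate_manager.go:356] "
        "kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is "
        "2026-02-24 05:52:08 +0000 UTC, rotation deadline is "
        "2025-11-20 00:36:31.92250826 +0000 UTC")

def parse_utc(stamp: str) -> datetime:
    """Parse the Go time format used in these logs, ignoring sub-second digits."""
    m = re.match(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})", stamp)
    return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

expiry, deadline = re.search(
    r"Certificate expiration is (.+?), rotation deadline is (.+)$", LINE).groups()
margin = parse_utc(expiry) - parse_utc(deadline)
# The kubelet schedules rotation well before expiry; here the margin is ~96 days.
```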
Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.148823 4835 server.go:997] "Starting client certificate rotation" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.148851 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.150626 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-20 00:36:31.92250826 +0000 UTC Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.150749 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.190740 4835 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.194802 4835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.196202 4835 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.218507 4835 log.go:25] "Validated CRI v1 runtime API" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.261948 4835 log.go:25] "Validated CRI v1 image API" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.263871 4835 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.268415 4835 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-15-02-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.268440 4835 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.282829 4835 manager.go:217] Machine: {Timestamp:2026-02-16 15:07:31.280179957 +0000 UTC m=+0.572172872 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9 BootID:ee98f291-ae22-4f9b-b939-b249002beb8e Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 
Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:18:8c:70 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:18:8c:70 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:12:b0:f5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b8:5d:8b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4f:70:6f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fc:95:8a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a6:c1:fd:69:c6:ed Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0e:a7:87:34:6e:5e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.283028 4835 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.283184 4835 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.283419 4835 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.283564 4835 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.283595 4835 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.283785 4835 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.283796 4835 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.284271 4835 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.284305 4835 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.284450 4835 state_mem.go:36] "Initialized new in-memory state store" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.284601 4835 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.290089 4835 kubelet.go:418] "Attempting to sync node with API server" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.290108 4835 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.290122 4835 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.290133 4835 kubelet.go:324] "Adding apiserver pod source" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.290143 4835 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.294007 4835 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.295453 4835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.296330 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.296335 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.296477 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.296481 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.297933 4835 kubelet.go:854] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299640 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299668 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299677 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299685 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299698 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299708 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299716 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299731 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299740 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299748 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299773 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.299782 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.301870 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.302316 4835 server.go:1280] "Started kubelet" Feb 16 
15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.303565 4835 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.303559 4835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 15:07:31 crc systemd[1]: Started Kubernetes Kubelet. Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.304597 4835 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.306687 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.306858 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.306932 4835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.307426 4835 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.307455 4835 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.307427 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:33:28.797397957 +0000 UTC Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.307652 4835 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.308643 4835 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 15:07:31 crc kubenswrapper[4835]: 
W0216 15:07:31.313588 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.313685 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.314127 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="200ms" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.314698 4835 factory.go:55] Registering systemd factory Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.314723 4835 factory.go:221] Registration of the systemd container factory successfully Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.315616 4835 factory.go:153] Registering CRI-O factory Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.315664 4835 factory.go:221] Registration of the crio container factory successfully Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.314499 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.175:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c289f8b3d1d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 15:07:31.302289873 +0000 UTC m=+0.594282768,LastTimestamp:2026-02-16 15:07:31.302289873 +0000 UTC m=+0.594282768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.315776 4835 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.315810 4835 factory.go:103] Registering Raw factory Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.315836 4835 manager.go:1196] Started watching for new ooms in manager Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.316352 4835 server.go:460] "Adding debug handlers to kubelet server" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.316949 4835 manager.go:319] Starting recovery of all containers Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332305 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332380 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332404 4835 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332424 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332442 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332460 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332480 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332497 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332520 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332564 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332582 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332600 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332619 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332643 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332660 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332681 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332699 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332760 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332778 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332795 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332813 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" 
seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332832 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332850 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332867 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332887 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332910 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.332986 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 
15:07:31.333016 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333038 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333063 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333081 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333099 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333116 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333135 4835 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333153 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333172 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333189 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333207 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333225 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333242 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333264 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333282 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333301 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333319 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333337 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333354 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333372 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333392 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333411 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333428 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333446 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333465 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" 
seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333495 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333516 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333575 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333634 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333656 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333676 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 16 15:07:31 crc 
kubenswrapper[4835]: I0216 15:07:31.333695 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333713 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333733 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333751 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333768 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333786 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333804 4835 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333821 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333878 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333901 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333925 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333948 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333974 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.333996 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334021 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334038 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334055 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334071 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334090 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334110 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334127 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334146 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334163 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334180 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334198 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334216 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334237 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334255 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334273 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334291 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334313 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" 
seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334334 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334351 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334368 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334386 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334406 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334424 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: 
I0216 15:07:31.334443 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334460 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334478 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334496 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334516 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334700 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334723 4835 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334743 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.334767 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335087 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335129 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335155 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335181 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335210 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335232 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335249 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335268 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335285 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335303 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335325 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335344 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335363 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335381 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335399 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335438 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335459 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335478 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335498 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335515 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335574 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335595 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335612 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335629 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335740 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335786 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335853 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335871 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" 
Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335950 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.335983 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336002 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336019 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336039 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336057 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336074 
4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336091 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336109 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336129 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336147 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336167 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336185 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336203 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336220 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336243 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336269 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336294 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336317 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336341 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336370 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336394 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336417 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336495 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336521 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336569 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336589 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336607 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336627 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336654 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336674 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336691 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336709 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336728 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336744 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336763 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336793 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336810 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336827 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336844 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336861 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336877 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336896 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336921 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336970 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.336990 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337008 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337028 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337046 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337063 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337097 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337117 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337136 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337155 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337325 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337351 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337370 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337389 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337408 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337427 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337444 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337465 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337491 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337517 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337580 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337605 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337628 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337646 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.337668 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.339997 4835 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.340385 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.340415 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.340435 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.340454 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.340472 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.340487 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.340499 4835 reconstruct.go:97] "Volume reconstruction finished" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.340509 4835 reconciler.go:26] "Reconciler: start to sync state" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.354786 4835 manager.go:324] Recovery completed Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.370146 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.374200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.374233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 
crc kubenswrapper[4835]: I0216 15:07:31.374242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.375089 4835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.375133 4835 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.375143 4835 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.375168 4835 state_mem.go:36] "Initialized new in-memory state store" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.377081 4835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.377151 4835 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.377407 4835 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.377459 4835 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.379389 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.379449 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection 
refused" logger="UnhandledError" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.398078 4835 policy_none.go:49] "None policy: Start" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.399222 4835 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.399259 4835 state_mem.go:35] "Initializing new in-memory state store" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.408997 4835 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.452637 4835 manager.go:334] "Starting Device Plugin manager" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.453202 4835 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.453241 4835 server.go:79] "Starting device plugin registration server" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.453751 4835 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.453884 4835 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.454565 4835 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.454658 4835 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.454671 4835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.460044 4835 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 
15:07:31.478261 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.478377 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.479644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.479709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.479719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.479904 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.480130 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.480197 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481328 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481352 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.481399 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482222 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482303 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482395 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482561 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.482602 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.483137 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.483169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.483182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.483331 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.483409 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.483434 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484209 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484232 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.484872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.515315 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="400ms" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.543350 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.543856 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.543875 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.543908 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.543922 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.543937 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.543950 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.544024 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.544050 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.544103 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.544139 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.544159 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.544174 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.544188 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.544240 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.555258 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.556409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.556463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.556471 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.556492 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.556997 4835 kubelet_node_status.go:99] "Unable 
to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645463 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645516 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645558 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645601 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645619 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645640 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645660 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645697 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645722 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645770 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645722 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645782 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645799 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645708 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645725 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645847 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645851 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645872 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645911 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645916 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645916 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645939 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645958 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645979 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645982 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.645994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.646105 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.757746 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.759185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.759221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.759229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.759252 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 
15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.759615 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.822441 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.849359 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.874788 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.891075 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-870bcc91df0c9b9488a73651e722bed8a1698d5b1b707b7d8a370702e6920ba7 WatchSource:0}: Error finding container 870bcc91df0c9b9488a73651e722bed8a1698d5b1b707b7d8a370702e6920ba7: Status 404 returned error can't find the container with id 870bcc91df0c9b9488a73651e722bed8a1698d5b1b707b7d8a370702e6920ba7 Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.892223 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: I0216 15:07:31.899635 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.906152 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f484ec1538463e86564c654536ad61689f7be6f45fa4ddb607b689beac3dd1cc WatchSource:0}: Error finding container f484ec1538463e86564c654536ad61689f7be6f45fa4ddb607b689beac3dd1cc: Status 404 returned error can't find the container with id f484ec1538463e86564c654536ad61689f7be6f45fa4ddb607b689beac3dd1cc Feb 16 15:07:31 crc kubenswrapper[4835]: E0216 15:07:31.916638 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="800ms" Feb 16 15:07:31 crc kubenswrapper[4835]: W0216 15:07:31.919834 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4d933957eeaae31d41ed95c5e2af0a43a237c3146f112f174e059e45ae1bcfe2 WatchSource:0}: Error finding container 4d933957eeaae31d41ed95c5e2af0a43a237c3146f112f174e059e45ae1bcfe2: Status 404 returned error can't find the container with id 4d933957eeaae31d41ed95c5e2af0a43a237c3146f112f174e059e45ae1bcfe2 Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.160360 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.161942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.161980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:32 crc 
kubenswrapper[4835]: I0216 15:07:32.161990 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.162034 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 15:07:32 crc kubenswrapper[4835]: E0216 15:07:32.162505 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Feb 16 15:07:32 crc kubenswrapper[4835]: W0216 15:07:32.173269 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:32 crc kubenswrapper[4835]: E0216 15:07:32.173336 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.307520 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:52:23.810667685 +0000 UTC Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.307594 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.381636 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d933957eeaae31d41ed95c5e2af0a43a237c3146f112f174e059e45ae1bcfe2"} Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.382984 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f484ec1538463e86564c654536ad61689f7be6f45fa4ddb607b689beac3dd1cc"} Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.384454 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6bcb0c9bb1613e9d417acbfc931a16f997fd4b026514419a1bca62295f454de4"} Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.385585 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"870bcc91df0c9b9488a73651e722bed8a1698d5b1b707b7d8a370702e6920ba7"} Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.386554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b72f470fa32c41af6d17e60ace8cbe945ab5e0fae715b2f5c65ff9e9399eca00"} Feb 16 15:07:32 crc kubenswrapper[4835]: W0216 15:07:32.466272 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:32 crc kubenswrapper[4835]: E0216 15:07:32.466723 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:32 crc kubenswrapper[4835]: W0216 15:07:32.589160 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:32 crc kubenswrapper[4835]: E0216 15:07:32.589277 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:32 crc kubenswrapper[4835]: W0216 15:07:32.615204 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:32 crc kubenswrapper[4835]: E0216 15:07:32.615286 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:32 crc kubenswrapper[4835]: E0216 15:07:32.717475 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="1.6s" Feb 
16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.963520 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.965637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.965674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.965686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:32 crc kubenswrapper[4835]: I0216 15:07:32.965709 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 15:07:32 crc kubenswrapper[4835]: E0216 15:07:32.966145 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.307646 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:31:03.161599445 +0000 UTC Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.308386 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.330327 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 15:07:33 crc kubenswrapper[4835]: E0216 15:07:33.332364 4835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.392999 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540" exitCode=0 Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.393127 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540"} Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.393162 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.394405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.394461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.394482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.397274 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8" exitCode=0 Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.397337 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8"} Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.397457 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.397707 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.399399 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.399449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.399467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.401246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.401285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.401307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.403766 4835 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8" exitCode=0 Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.403866 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8"} Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.404026 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.405695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.405726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.405745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.409017 4835 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a" exitCode=0 Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.409145 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a"} Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.409195 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.410409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.410450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.410468 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.412863 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952"} Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.412923 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323"} Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.412946 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79"} Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.412962 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029"} Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.413085 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.414115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 15:07:33.414210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:33 crc kubenswrapper[4835]: I0216 
15:07:33.414230 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.307895 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:54:22.48895452 +0000 UTC Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.308256 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:34 crc kubenswrapper[4835]: E0216 15:07:34.319494 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="3.2s" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.420609 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a" exitCode=0 Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.420692 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.420699 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.421551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.421588 4835 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.421598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.423237 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.423267 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.423277 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.423310 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.425068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.425110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.425124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.427268 4835 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.427275 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fa003919dff9e8b7f05d24f459d9bccc04359e6db580f3bb4a311fefc6b515dd"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.428912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.428938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.428950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.431501 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9b210f3dd07b38812514387f1b2c6562716cd3533936e020ac57751770f2c9f0"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.431558 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.431581 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.431595 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.431607 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d"} Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.431612 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.431562 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.432458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.432471 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.432491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.432496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.432505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.432509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:34 crc kubenswrapper[4835]: W0216 15:07:34.474946 4835 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.175:6443: connect: connection refused Feb 16 15:07:34 crc kubenswrapper[4835]: E0216 15:07:34.475039 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.175:6443: connect: connection refused" logger="UnhandledError" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.566866 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.568196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.568234 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.568243 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:34 crc kubenswrapper[4835]: I0216 15:07:34.568270 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 15:07:34 crc kubenswrapper[4835]: E0216 15:07:34.568771 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.175:6443: connect: connection refused" node="crc" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.308088 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:25:04.1271899 +0000 UTC Feb 
16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.438432 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c" exitCode=0 Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.438554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c"} Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.438678 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.439740 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.440133 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.440377 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.440665 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.440711 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441313 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.441909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.443433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.443464 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.443481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:35 crc kubenswrapper[4835]: I0216 15:07:35.724093 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.308392 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:21:35.345563672 +0000 UTC Feb 16 15:07:36 crc 
kubenswrapper[4835]: I0216 15:07:36.449688 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.449748 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.449758 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc"} Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.449810 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1"} Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.449832 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8"} Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.449847 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59"} Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.450955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.451012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:36 crc kubenswrapper[4835]: I0216 15:07:36.451029 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 
15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.308624 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:30:47.220000413 +0000 UTC Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.373037 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.458771 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59"} Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.458917 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.460152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.460200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.460217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.768921 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.770567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.770604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.770617 4835 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:37 crc kubenswrapper[4835]: I0216 15:07:37.770640 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 15:07:38 crc kubenswrapper[4835]: I0216 15:07:38.026622 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 15:07:38 crc kubenswrapper[4835]: I0216 15:07:38.309448 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:25:46.888862357 +0000 UTC Feb 16 15:07:38 crc kubenswrapper[4835]: I0216 15:07:38.462001 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:38 crc kubenswrapper[4835]: I0216 15:07:38.463353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:38 crc kubenswrapper[4835]: I0216 15:07:38.463402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:38 crc kubenswrapper[4835]: I0216 15:07:38.463420 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.037362 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.037508 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.037591 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.038747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:39 crc kubenswrapper[4835]: 
I0216 15:07:39.038776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.038788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.310138 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:23:19.934291809 +0000 UTC Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.464590 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.465948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.465998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:39 crc kubenswrapper[4835]: I0216 15:07:39.466018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.056389 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.056585 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.057638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.057668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.057680 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.310317 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:52:51.122167471 +0000 UTC Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.926255 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.926561 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.928090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.928156 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.928190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.950898 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.951196 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.952684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.952780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.952801 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:40 crc kubenswrapper[4835]: I0216 15:07:40.973581 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.036959 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.037188 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.038878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.038934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.038949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.310801 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:54:14.021130598 +0000 UTC Feb 16 15:07:41 crc kubenswrapper[4835]: E0216 15:07:41.460167 4835 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.468949 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.470282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 
15:07:41.470357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:41 crc kubenswrapper[4835]: I0216 15:07:41.470383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:42 crc kubenswrapper[4835]: I0216 15:07:42.311041 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:52:03.709514035 +0000 UTC Feb 16 15:07:42 crc kubenswrapper[4835]: I0216 15:07:42.504820 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:42 crc kubenswrapper[4835]: I0216 15:07:42.505045 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:42 crc kubenswrapper[4835]: I0216 15:07:42.506614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:42 crc kubenswrapper[4835]: I0216 15:07:42.506653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:42 crc kubenswrapper[4835]: I0216 15:07:42.506665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:42 crc kubenswrapper[4835]: I0216 15:07:42.601866 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:42 crc kubenswrapper[4835]: I0216 15:07:42.609330 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:43 crc kubenswrapper[4835]: I0216 15:07:43.312191 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:56:04.523578836 +0000 UTC Feb 16 15:07:43 crc kubenswrapper[4835]: I0216 15:07:43.473845 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:43 crc kubenswrapper[4835]: I0216 15:07:43.474914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:43 crc kubenswrapper[4835]: I0216 15:07:43.474948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:43 crc kubenswrapper[4835]: I0216 15:07:43.474961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:43 crc kubenswrapper[4835]: I0216 15:07:43.478872 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:43 crc kubenswrapper[4835]: I0216 15:07:43.974316 4835 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 15:07:43 crc kubenswrapper[4835]: I0216 15:07:43.974820 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:07:44 crc kubenswrapper[4835]: I0216 15:07:44.312629 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 
15:06:52.240356131 +0000 UTC Feb 16 15:07:44 crc kubenswrapper[4835]: I0216 15:07:44.476192 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:44 crc kubenswrapper[4835]: I0216 15:07:44.477325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:44 crc kubenswrapper[4835]: I0216 15:07:44.477380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:44 crc kubenswrapper[4835]: I0216 15:07:44.477398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:44 crc kubenswrapper[4835]: W0216 15:07:44.810627 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 15:07:44 crc kubenswrapper[4835]: I0216 15:07:44.810708 4835 trace.go:236] Trace[796076411]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 15:07:34.809) (total time: 10000ms): Feb 16 15:07:44 crc kubenswrapper[4835]: Trace[796076411]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:07:44.810) Feb 16 15:07:44 crc kubenswrapper[4835]: Trace[796076411]: [10.000716902s] [10.000716902s] END Feb 16 15:07:44 crc kubenswrapper[4835]: E0216 15:07:44.810728 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 15:07:44 crc kubenswrapper[4835]: W0216 
15:07:44.822118 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 15:07:44 crc kubenswrapper[4835]: I0216 15:07:44.822193 4835 trace.go:236] Trace[136393756]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 15:07:34.820) (total time: 10001ms): Feb 16 15:07:44 crc kubenswrapper[4835]: Trace[136393756]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:07:44.822) Feb 16 15:07:44 crc kubenswrapper[4835]: Trace[136393756]: [10.001354488s] [10.001354488s] END Feb 16 15:07:44 crc kubenswrapper[4835]: E0216 15:07:44.822209 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.307726 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.314174 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:23:02.002925448 +0000 UTC Feb 16 15:07:45 crc kubenswrapper[4835]: W0216 15:07:45.445350 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake 
timeout Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.445473 4835 trace.go:236] Trace[264835891]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 15:07:35.443) (total time: 10001ms): Feb 16 15:07:45 crc kubenswrapper[4835]: Trace[264835891]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:07:45.445) Feb 16 15:07:45 crc kubenswrapper[4835]: Trace[264835891]: [10.001668615s] [10.001668615s] END Feb 16 15:07:45 crc kubenswrapper[4835]: E0216 15:07:45.445504 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.480076 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.481806 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9b210f3dd07b38812514387f1b2c6562716cd3533936e020ac57751770f2c9f0" exitCode=255 Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.481914 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.481919 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9b210f3dd07b38812514387f1b2c6562716cd3533936e020ac57751770f2c9f0"} Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.482086 4835 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.482504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.482544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.482552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.483119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.483149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.483160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.483644 4835 scope.go:117] "RemoveContainer" containerID="9b210f3dd07b38812514387f1b2c6562716cd3533936e020ac57751770f2c9f0" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.574715 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.574793 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.578493 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 15:07:45 crc kubenswrapper[4835]: I0216 15:07:45.578559 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 15:07:46 crc kubenswrapper[4835]: I0216 15:07:46.315260 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:51:54.278609963 +0000 UTC Feb 16 15:07:46 crc kubenswrapper[4835]: I0216 15:07:46.485423 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 15:07:46 crc kubenswrapper[4835]: I0216 15:07:46.486864 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7"} Feb 16 15:07:46 crc kubenswrapper[4835]: I0216 15:07:46.486982 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:46 crc kubenswrapper[4835]: I0216 15:07:46.487692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 15:07:46 crc kubenswrapper[4835]: I0216 15:07:46.487749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:46 crc kubenswrapper[4835]: I0216 15:07:46.487762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:47 crc kubenswrapper[4835]: I0216 15:07:47.316093 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:22:07.633449276 +0000 UTC Feb 16 15:07:48 crc kubenswrapper[4835]: I0216 15:07:48.316885 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:13:59.552791172 +0000 UTC Feb 16 15:07:48 crc kubenswrapper[4835]: I0216 15:07:48.781250 4835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.043179 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.043374 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.043611 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.044624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.044674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.044684 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.048229 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.317053 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:08:29.271339621 +0000 UTC Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.493393 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.494368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.494413 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:49 crc kubenswrapper[4835]: I0216 15:07:49.494422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.198050 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.317919 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:10:32.238306467 +0000 UTC Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.496524 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.498140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:50 crc 
kubenswrapper[4835]: I0216 15:07:50.498193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.498203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:50 crc kubenswrapper[4835]: E0216 15:07:50.572112 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 15:07:50 crc kubenswrapper[4835]: E0216 15:07:50.578780 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.579184 4835 trace.go:236] Trace[1747147617]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 15:07:37.705) (total time: 12873ms): Feb 16 15:07:50 crc kubenswrapper[4835]: Trace[1747147617]: ---"Objects listed" error: 12873ms (15:07:50.578) Feb 16 15:07:50 crc kubenswrapper[4835]: Trace[1747147617]: [12.873274236s] [12.873274236s] END Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.579219 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.581642 4835 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.588205 4835 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.617676 4835 csr.go:261] certificate signing request csr-vwr89 is approved, waiting to be issued Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 
15:07:50.625784 4835 csr.go:257] certificate signing request csr-vwr89 is issued Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.977949 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.978145 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.979370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.979401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.979413 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:50 crc kubenswrapper[4835]: I0216 15:07:50.983212 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.068281 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.068600 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.070616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.070656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.070665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.088928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.149191 4835 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 15:07:51 crc kubenswrapper[4835]: W0216 15:07:51.149436 4835 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 15:07:51 crc kubenswrapper[4835]: W0216 15:07:51.149488 4835 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.149372 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.175:46950->38.102.83.175:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894c28a1c9d3993 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 15:07:31.904788883 +0000 UTC 
m=+1.196781778,LastTimestamp:2026-02-16 15:07:31.904788883 +0000 UTC m=+1.196781778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.250279 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.301491 4835 apiserver.go:52] "Watching apiserver" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.305095 4835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.305464 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.306338 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.306412 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.306482 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.306808 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.306860 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.306909 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.306937 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.306990 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.307425 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.308872 4835 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.309722 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.309717 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.314213 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.314662 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.314959 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.315229 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.315605 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.315870 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.316309 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.318095 4835 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:20:42.149899588 +0000 UTC Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.368998 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386380 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386438 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386464 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386486 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386514 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386557 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386581 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386603 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386627 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386649 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386669 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386690 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386713 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386735 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386757 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386779 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386804 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386831 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386827 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386853 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386878 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386914 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386941 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386968 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386990 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387014 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387040 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387064 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387089 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387116 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 15:07:51 crc kubenswrapper[4835]: 
I0216 15:07:51.387142 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387166 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387186 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387211 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387233 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387346 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387396 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387422 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387445 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387467 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387489 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc 
kubenswrapper[4835]: I0216 15:07:51.387512 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387537 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387575 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387595 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387616 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387644 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387669 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387698 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387718 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387737 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387759 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 15:07:51 crc kubenswrapper[4835]: 
I0216 15:07:51.387779 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387802 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387823 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387844 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387875 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387900 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387923 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387964 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388017 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388215 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388237 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388258 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388279 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388300 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388323 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388346 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388373 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388400 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388422 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388445 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388466 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 
15:07:51.388486 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388509 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388531 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388574 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388603 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388627 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388651 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388674 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388696 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388719 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388743 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388768 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388792 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388815 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388841 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388866 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388894 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388921 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388948 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388980 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389002 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389027 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389054 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389078 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389102 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389125 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389148 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389173 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389196 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389221 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389245 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389267 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389293 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 15:07:51 crc 
kubenswrapper[4835]: I0216 15:07:51.389318 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389347 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389375 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389401 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389426 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389448 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389470 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389494 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389515 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389559 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389587 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 15:07:51 crc 
kubenswrapper[4835]: I0216 15:07:51.389614 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389638 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389686 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389707 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389732 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389759 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389784 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389807 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389830 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389855 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389882 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389905 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389928 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389951 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389975 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390011 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390033 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390054 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390074 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390100 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390122 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390147 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390171 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390194 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390214 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390235 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390258 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc 
kubenswrapper[4835]: I0216 15:07:51.390278 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390300 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390322 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390347 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390371 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390395 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390417 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390438 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390464 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390488 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390511 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 15:07:51 
crc kubenswrapper[4835]: I0216 15:07:51.390537 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390574 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390594 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390617 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390641 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390712 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390736 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390765 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390788 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390809 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 
15:07:51.390833 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390854 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390876 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390899 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390922 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390946 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390969 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390991 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.391015 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.391036 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.391058 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 
15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.391081 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.391105 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.393044 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394916 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386846 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.386996 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387048 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.402939 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387056 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387197 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387364 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387561 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387610 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387790 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387884 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387884 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.387938 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388212 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388254 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388286 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388351 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388397 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388499 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388504 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388565 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388601 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388738 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388798 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388804 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388834 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388874 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.388966 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389019 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389135 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389201 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389228 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389269 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389284 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389300 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389321 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389445 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389497 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389534 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389577 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389580 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389634 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389652 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389702 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389799 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389839 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.389914 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390053 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390090 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390126 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.390227 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.392139 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:07:51.892020756 +0000 UTC m=+21.184013651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.392245 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.392450 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.392484 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.392904 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.393153 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.393225 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.393736 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394006 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394117 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394172 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394257 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394432 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394670 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394788 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.394895 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.395184 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.395743 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.395947 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.396346 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.397581 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.397946 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.403227 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.403303 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.403688 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.403715 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.403729 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.404000 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.404152 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.404217 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.404409 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.404473 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.405186 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.405259 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.405703 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.406174 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.406688 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.406784 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.407261 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.407301 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.409658 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.410885 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.410971 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.411125 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.411377 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.411389 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.411404 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.412357 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413056 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413042 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413331 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413368 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413388 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413427 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413454 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413488 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413511 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413604 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413701 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413733 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413757 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413777 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413798 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") 
pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413819 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413838 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413894 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413960 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413993 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414051 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414077 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414110 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414136 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414158 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414183 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414210 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414236 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414264 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414389 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414425 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.413796 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.414701 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.415068 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.415482 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.415700 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.416968 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.417438 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.417778 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.418290 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.418708 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.420230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.419615 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.419797 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.419814 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.419820 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.420059 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.420056 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.420647 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.422062 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.422838 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.422903 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.423151 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.423576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.423659 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.423761 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.423763 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:51.923735949 +0000 UTC m=+21.215728854 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.423854 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:51.923835741 +0000 UTC m=+21.215828626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.423911 4835 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.424517 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.424721 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.425055 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.425232 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.425319 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.425814 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.426036 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.426098 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.426111 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.426501 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.426307 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.426369 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.426398 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.426439 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.427046 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.427122 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.427246 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.428028 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.428201 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.428888 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.428966 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.429093 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.430379 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.433153 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.433238 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.433998 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.435052 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.435133 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.435605 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.435910 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.436028 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.436337 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.436352 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.436587 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.436743 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.437010 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.437077 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.437240 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.437357 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.444294 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.444344 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.444372 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.444461 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:51.944434677 +0000 UTC m=+21.236427572 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.452151 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.452190 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.452211 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.452464 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:51.952430119 +0000 UTC m=+21.244423024 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.452926 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.453192 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.453655 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.454340 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.454620 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.454748 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.455392 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.455495 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.455695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.455727 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.455775 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.459172 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.459413 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.459468 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.459656 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.462816 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.463188 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.463232 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.464225 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.463093 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.464486 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.465111 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.465626 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.466454 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.466795 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.467182 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.467559 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.467857 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.467912 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.468101 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.468277 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.469456 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.469843 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.470262 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.470285 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.470642 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.473075 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.473503 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.477206 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.493032 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.503230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.504027 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.508935 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.514398 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515000 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515045 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515126 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515138 4835 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515150 4835 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515160 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 
15:07:51.515169 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515180 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515191 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515205 4835 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515216 4835 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515228 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515238 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515249 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515259 4835 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515313 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515364 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515389 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515402 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515413 4835 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc 
kubenswrapper[4835]: I0216 15:07:51.515423 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515434 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515447 4835 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515458 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515472 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515482 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515491 4835 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515499 4835 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515508 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515519 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515536 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515562 4835 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515574 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515584 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515593 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on 
node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515604 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515616 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515626 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515638 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515651 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515664 4835 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515675 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 
15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515686 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515696 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515707 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515717 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515728 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515739 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515749 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515761 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515772 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515782 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515794 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515804 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515815 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515826 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515837 4835 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515848 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515861 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515872 4835 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515887 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515900 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515911 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515922 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515934 4835 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515944 4835 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515954 4835 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515965 4835 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515974 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515986 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.515999 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 
15:07:51.516012 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516023 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516035 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516065 4835 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516076 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516087 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516097 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516107 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516119 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516131 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516143 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516156 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516167 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516178 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516193 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516204 4835 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516213 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516223 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516233 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516244 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516255 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516264 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516274 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516285 4835 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516296 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516306 4835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516316 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516326 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516337 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516347 4835 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516357 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516368 4835 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516377 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516388 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516399 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516411 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516422 4835 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516432 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516441 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516451 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516462 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516471 4835 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516479 4835 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath 
\"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516489 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516499 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516509 4835 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516527 4835 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516554 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516567 4835 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516577 4835 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516588 4835 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516597 4835 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516608 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516618 4835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516628 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516637 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516645 4835 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516656 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516667 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516678 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516688 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516698 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516709 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516719 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516730 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on 
node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516740 4835 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516751 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516761 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516771 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516782 4835 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516793 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516802 4835 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516813 4835 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516823 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516833 4835 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516843 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516853 4835 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516862 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516876 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516887 4835 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516897 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516908 4835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516918 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516928 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516938 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516948 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516958 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc 
kubenswrapper[4835]: I0216 15:07:51.516968 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516979 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.516992 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517003 4835 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517013 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517024 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517035 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517046 4835 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517056 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517063 4835 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517071 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517079 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517086 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517094 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517101 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517109 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517117 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517125 4835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517132 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517140 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517148 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517155 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 
16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517163 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517173 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517182 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517190 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517199 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517210 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517220 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517230 4835 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517240 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517250 4835 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517258 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517265 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517274 4835 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517282 4835 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517290 4835 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.517300 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.522249 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.528897 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.540660 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.555432 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.565965 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.576940 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.585719 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.596885 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.605868 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.612711 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vhqvm"] Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.613004 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vhqvm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.615808 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.616162 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.616299 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.618003 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.629868 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.630257 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 15:02:50 +0000 UTC, rotation deadline is 2026-10-30 14:55:49.464497802 +0000 UTC Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.630310 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6143h47m57.834191747s for next certificate rotation Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.638327 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.638731 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.644451 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 15:07:51 crc kubenswrapper[4835]: W0216 15:07:51.652661 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c51c61ade85c710b1e4070ce98b2d4ce282b9a2cda5ae181c1be1b0bdde1e1c6 WatchSource:0}: Error finding container c51c61ade85c710b1e4070ce98b2d4ce282b9a2cda5ae181c1be1b0bdde1e1c6: Status 404 returned error can't find the container with id c51c61ade85c710b1e4070ce98b2d4ce282b9a2cda5ae181c1be1b0bdde1e1c6 Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.655610 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: W0216 15:07:51.659839 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-fd1f2c7e3ee50e4ff38caf21beaa9de7b115f41f40b488fd0b7562cea2a2ccb9 WatchSource:0}: Error finding container fd1f2c7e3ee50e4ff38caf21beaa9de7b115f41f40b488fd0b7562cea2a2ccb9: Status 404 returned error can't find the container with id fd1f2c7e3ee50e4ff38caf21beaa9de7b115f41f40b488fd0b7562cea2a2ccb9 Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.676520 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.693287 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.712815 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kklmz"] Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.713851 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.716079 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.717901 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.718131 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.718405 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.718824 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9fe09143-7647-46a2-9631-18ef4f37f58e-hosts-file\") pod \"node-resolver-vhqvm\" (UID: \"9fe09143-7647-46a2-9631-18ef4f37f58e\") " pod="openshift-dns/node-resolver-vhqvm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.718881 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhbf5\" (UniqueName: 
\"kubernetes.io/projected/9fe09143-7647-46a2-9631-18ef4f37f58e-kube-api-access-dhbf5\") pod \"node-resolver-vhqvm\" (UID: \"9fe09143-7647-46a2-9631-18ef4f37f58e\") " pod="openshift-dns/node-resolver-vhqvm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.719058 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.733877 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.753887 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.771252 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.792511 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.808411 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.812884 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.819561 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-host\") pod \"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.819604 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-serviceca\") pod \"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.819625 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j8r8\" (UniqueName: \"kubernetes.io/projected/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-kube-api-access-8j8r8\") pod \"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc 
kubenswrapper[4835]: I0216 15:07:51.819644 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhbf5\" (UniqueName: \"kubernetes.io/projected/9fe09143-7647-46a2-9631-18ef4f37f58e-kube-api-access-dhbf5\") pod \"node-resolver-vhqvm\" (UID: \"9fe09143-7647-46a2-9631-18ef4f37f58e\") " pod="openshift-dns/node-resolver-vhqvm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.819671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9fe09143-7647-46a2-9631-18ef4f37f58e-hosts-file\") pod \"node-resolver-vhqvm\" (UID: \"9fe09143-7647-46a2-9631-18ef4f37f58e\") " pod="openshift-dns/node-resolver-vhqvm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.819725 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9fe09143-7647-46a2-9631-18ef4f37f58e-hosts-file\") pod \"node-resolver-vhqvm\" (UID: \"9fe09143-7647-46a2-9631-18ef4f37f58e\") " pod="openshift-dns/node-resolver-vhqvm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.824365 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.849825 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.854365 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhbf5\" (UniqueName: \"kubernetes.io/projected/9fe09143-7647-46a2-9631-18ef4f37f58e-kube-api-access-dhbf5\") pod \"node-resolver-vhqvm\" (UID: \"9fe09143-7647-46a2-9631-18ef4f37f58e\") " pod="openshift-dns/node-resolver-vhqvm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.867464 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.882575 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.892422 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.902001 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.914248 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.920791 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.920880 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-host\") pod 
\"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.920918 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-serviceca\") pod \"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.920940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j8r8\" (UniqueName: \"kubernetes.io/projected/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-kube-api-access-8j8r8\") pod \"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc kubenswrapper[4835]: E0216 15:07:51.920988 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:07:52.920944423 +0000 UTC m=+22.212937318 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.921009 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-host\") pod \"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.923038 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-serviceca\") pod \"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.924829 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vhqvm" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.927127 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:51 crc kubenswrapper[4835]: I0216 15:07:51.946428 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j8r8\" (UniqueName: \"kubernetes.io/projected/4aa94d4d-554e-4fab-9df4-426bbaa96ea8-kube-api-access-8j8r8\") pod \"node-ca-kklmz\" (UID: \"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\") " pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.021320 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.021359 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.021380 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021486 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021561 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:53.021517521 +0000 UTC m=+22.313510416 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021611 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021637 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:53.021629554 +0000 UTC m=+22.313622449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.021646 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021691 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 
15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021807 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021835 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021724 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021915 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021929 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.021972 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:53.021964692 +0000 UTC m=+22.313957577 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.022167 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:53.022131706 +0000 UTC m=+22.314124611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.035400 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kklmz" Feb 16 15:07:52 crc kubenswrapper[4835]: W0216 15:07:52.048943 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aa94d4d_554e_4fab_9df4_426bbaa96ea8.slice/crio-46abd2c59a2cab02efd0be23f7f9fe91fbb52f53dae86d970c890c6b0b27d365 WatchSource:0}: Error finding container 46abd2c59a2cab02efd0be23f7f9fe91fbb52f53dae86d970c890c6b0b27d365: Status 404 returned error can't find the container with id 46abd2c59a2cab02efd0be23f7f9fe91fbb52f53dae86d970c890c6b0b27d365 Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.156341 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gncxk"] Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.156791 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.156969 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nd4kl"] Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.157323 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.157988 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rq4qc"] Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.158681 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.160505 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.160620 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.161341 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.161359 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.165158 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.165305 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.165433 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.165509 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.165443 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.165753 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.165686 4835 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.165965 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.185478 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifes
ts\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f
8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.196301 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.210222 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.221064 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.230528 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.239758 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.250517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.260931 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.273152 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.281635 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.293727 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.303358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.315244 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.318287 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:38:24.256561595 +0000 UTC Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.329898 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d233f2c8-6963-48c1-889e-ef20f52ad5b1-proxy-tls\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.329960 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-conf-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.329994 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-k8s-cni-cncf-io\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330024 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-multus-certs\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330050 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-cni-bin\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330073 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-os-release\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330100 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drnx7\" (UniqueName: \"kubernetes.io/projected/91e35405-0016-467d-9081-272eba8c8aa1-kube-api-access-drnx7\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330140 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-kubelet\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330205 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91e35405-0016-467d-9081-272eba8c8aa1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " 
pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330239 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bnbt\" (UniqueName: \"kubernetes.io/projected/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-kube-api-access-6bnbt\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330273 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgcbz\" (UniqueName: \"kubernetes.io/projected/d233f2c8-6963-48c1-889e-ef20f52ad5b1-kube-api-access-sgcbz\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330341 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-cni-multus\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330382 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-hostroot\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330406 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-daemon-config\") pod \"multus-gncxk\" (UID: 
\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330433 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-cni-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330451 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-cnibin\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330470 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-cnibin\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330487 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-cni-binary-copy\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330626 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-etc-kubernetes\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " 
pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330653 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330678 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91e35405-0016-467d-9081-272eba8c8aa1-cni-binary-copy\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330708 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-socket-dir-parent\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330728 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d233f2c8-6963-48c1-889e-ef20f52ad5b1-rootfs\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330818 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/d233f2c8-6963-48c1-889e-ef20f52ad5b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330870 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-os-release\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330932 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-system-cni-dir\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.330988 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-netns\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.331146 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-system-cni-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.378331 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.378506 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.431920 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-cni-binary-copy\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432000 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-etc-kubernetes\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432023 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432052 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91e35405-0016-467d-9081-272eba8c8aa1-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432075 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-socket-dir-parent\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432102 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d233f2c8-6963-48c1-889e-ef20f52ad5b1-rootfs\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432120 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d233f2c8-6963-48c1-889e-ef20f52ad5b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432134 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-os-release\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432150 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-system-cni-dir\") pod 
\"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432171 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-netns\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432190 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-system-cni-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432229 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d233f2c8-6963-48c1-889e-ef20f52ad5b1-proxy-tls\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432253 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-conf-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432276 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-k8s-cni-cncf-io\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " 
pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432295 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-multus-certs\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-os-release\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432328 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-cni-bin\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432344 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drnx7\" (UniqueName: \"kubernetes.io/projected/91e35405-0016-467d-9081-272eba8c8aa1-kube-api-access-drnx7\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432378 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-kubelet\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 
15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432394 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91e35405-0016-467d-9081-272eba8c8aa1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432418 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgcbz\" (UniqueName: \"kubernetes.io/projected/d233f2c8-6963-48c1-889e-ef20f52ad5b1-kube-api-access-sgcbz\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432435 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-cni-multus\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432452 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-hostroot\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432469 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-daemon-config\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 
15:07:52.432488 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bnbt\" (UniqueName: \"kubernetes.io/projected/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-kube-api-access-6bnbt\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432505 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-cni-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432531 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-cnibin\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432568 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-cnibin\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.432645 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-cnibin\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.441430 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-etc-kubernetes\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.442370 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/91e35405-0016-467d-9081-272eba8c8aa1-cni-binary-copy\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.442451 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-socket-dir-parent\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.442485 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d233f2c8-6963-48c1-889e-ef20f52ad5b1-rootfs\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.442989 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d233f2c8-6963-48c1-889e-ef20f52ad5b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.443204 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-os-release\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.443273 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-system-cni-dir\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.443365 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-kubelet\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.443530 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-netns\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.443590 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-system-cni-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.444093 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/91e35405-0016-467d-9081-272eba8c8aa1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: 
\"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.444445 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-cni-multus\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.444582 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-hostroot\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.445129 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-daemon-config\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.445500 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-cni-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.445576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-cnibin\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.445654 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-multus-conf-dir\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.445697 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-k8s-cni-cncf-io\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.445785 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-os-release\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.445827 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-var-lib-cni-bin\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.445857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-host-run-multus-certs\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.451043 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d233f2c8-6963-48c1-889e-ef20f52ad5b1-proxy-tls\") pod 
\"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.523585 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-cni-binary-copy\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.525910 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.526588 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.528394 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7" exitCode=255 Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.528455 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.528509 4835 scope.go:117] "RemoveContainer" containerID="9b210f3dd07b38812514387f1b2c6562716cd3533936e020ac57751770f2c9f0" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.532139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/91e35405-0016-467d-9081-272eba8c8aa1-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.542050 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kklmz" event={"ID":"4aa94d4d-554e-4fab-9df4-426bbaa96ea8","Type":"ContainerStarted","Data":"8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.542095 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kklmz" event={"ID":"4aa94d4d-554e-4fab-9df4-426bbaa96ea8","Type":"ContainerStarted","Data":"46abd2c59a2cab02efd0be23f7f9fe91fbb52f53dae86d970c890c6b0b27d365"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.547500 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nwz6"] Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.547798 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.548577 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.549816 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bnbt\" (UniqueName: \"kubernetes.io/projected/36a4edb0-ce1a-4b59-b1f9-f5b43255de2d-kube-api-access-6bnbt\") pod \"multus-gncxk\" (UID: \"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\") " pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.550074 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drnx7\" (UniqueName: \"kubernetes.io/projected/91e35405-0016-467d-9081-272eba8c8aa1-kube-api-access-drnx7\") pod \"multus-additional-cni-plugins-rq4qc\" (UID: \"91e35405-0016-467d-9081-272eba8c8aa1\") " pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.551314 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.551600 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.551761 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.551899 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.552017 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.552167 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 15:07:52 crc kubenswrapper[4835]: 
I0216 15:07:52.552404 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.555496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.555654 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.555690 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c51c61ade85c710b1e4070ce98b2d4ce282b9a2cda5ae181c1be1b0bdde1e1c6"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.568411 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.568467 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"68705b1dbafeb34cdd25370fe20b4d8d97913dea46fc22a2da3a85ef993f057f"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.572082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgcbz\" (UniqueName: 
\"kubernetes.io/projected/d233f2c8-6963-48c1-889e-ef20f52ad5b1-kube-api-access-sgcbz\") pod \"machine-config-daemon-nd4kl\" (UID: \"d233f2c8-6963-48c1-889e-ef20f52ad5b1\") " pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.572747 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vhqvm" event={"ID":"9fe09143-7647-46a2-9631-18ef4f37f58e","Type":"ContainerStarted","Data":"2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.572776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vhqvm" event={"ID":"9fe09143-7647-46a2-9631-18ef4f37f58e","Type":"ContainerStarted","Data":"b4ec6c215e291ada1119b7550ce4566b406f0ae9c2a9f3ff0d17ed405c6cef29"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.576232 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fd1f2c7e3ee50e4ff38caf21beaa9de7b115f41f40b488fd0b7562cea2a2ccb9"} Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.579083 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.599281 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.612079 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636724 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636764 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636781 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-systemd\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636795 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-config\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636847 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-kubelet\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636861 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-netns\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636891 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-env-overrides\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636919 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-var-lib-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 
15:07:52.636934 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636950 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovn-node-metrics-cert\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636966 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrwvw\" (UniqueName: \"kubernetes.io/projected/9a790a22-cc2f-414e-b43b-fd6df80d19da-kube-api-access-vrwvw\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.636982 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-log-socket\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.637015 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-bin\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" 
Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.637038 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-systemd-units\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.637051 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-netd\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.637073 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-slash\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.637086 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-script-lib\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.637101 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-ovn\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" 
Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.637116 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-node-log\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.637136 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-etc-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.640687 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"contain
erID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.644852 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.645316 4835 scope.go:117] "RemoveContainer" containerID="21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7" Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 
15:07:52.645577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.657504 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.688081 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.702136 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.717765 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.733714 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738075 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738118 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738144 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-systemd\") pod 
\"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738167 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-config\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738204 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-kubelet\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-netns\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738250 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-env-overrides\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738273 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-var-lib-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738267 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738339 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738295 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738390 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-kubelet\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738395 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovn-node-metrics-cert\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 
15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738573 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-log-socket\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738602 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrwvw\" (UniqueName: \"kubernetes.io/projected/9a790a22-cc2f-414e-b43b-fd6df80d19da-kube-api-access-vrwvw\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738628 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-bin\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738659 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-netd\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738686 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-systemd-units\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738702 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-slash\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738721 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-script-lib\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738748 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-ovn\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738765 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-node-log\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738782 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-etc-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738859 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-etc-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738884 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-systemd\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739135 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-env-overrides\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739148 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-netd\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738396 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-netns\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739233 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-log-socket\") pod \"ovnkube-node-6nwz6\" (UID: 
\"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-systemd-units\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739471 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-ovn\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739552 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-bin\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739555 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-node-log\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739595 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-var-lib-openvswitch\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 
15:07:52.739605 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-slash\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.738325 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739694 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-config\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.739834 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-script-lib\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.742919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovn-node-metrics-cert\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.763563 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.772896 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gncxk" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.773950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrwvw\" (UniqueName: \"kubernetes.io/projected/9a790a22-cc2f-414e-b43b-fd6df80d19da-kube-api-access-vrwvw\") pod \"ovnkube-node-6nwz6\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.785551 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.790745 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.820280 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: W0216 15:07:52.827135 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e35405_0016_467d_9081_272eba8c8aa1.slice/crio-fa6523a9f623749c821aff9a9b9cd8655acdee2db8121833c1b833fba453747b WatchSource:0}: Error finding container fa6523a9f623749c821aff9a9b9cd8655acdee2db8121833c1b833fba453747b: Status 404 returned error can't find the container with id fa6523a9f623749c821aff9a9b9cd8655acdee2db8121833c1b833fba453747b Feb 16 15:07:52 crc kubenswrapper[4835]: W0216 15:07:52.828894 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd233f2c8_6963_48c1_889e_ef20f52ad5b1.slice/crio-ebf1b8df3d8575a5fdeabc4ebbc6f5e07f87bc0fee5ea5841c851e8f9ce25870 WatchSource:0}: Error finding container ebf1b8df3d8575a5fdeabc4ebbc6f5e07f87bc0fee5ea5841c851e8f9ce25870: Status 404 returned error can't find the container with id ebf1b8df3d8575a5fdeabc4ebbc6f5e07f87bc0fee5ea5841c851e8f9ce25870 Feb 16 15:07:52 crc 
kubenswrapper[4835]: I0216 15:07:52.842394 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.855418 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.872050 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.884689 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:52 crc kubenswrapper[4835]: W0216 15:07:52.908357 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a790a22_cc2f_414e_b43b_fd6df80d19da.slice/crio-f53f650d0776d32d3427dbae6c234ce3becf1abd6f19c6de452bfb0da8df7312 WatchSource:0}: Error finding container f53f650d0776d32d3427dbae6c234ce3becf1abd6f19c6de452bfb0da8df7312: Status 404 returned error can't find the container with id f53f650d0776d32d3427dbae6c234ce3becf1abd6f19c6de452bfb0da8df7312 Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.909710 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.940658 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:52 crc kubenswrapper[4835]: E0216 15:07:52.941078 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:07:54.941035909 +0000 UTC m=+24.233028804 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.954619 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b210f3dd07b38812514387f1b2c6562716cd3533936e020ac57751770f2c9f0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:45Z\\\",\\\"message\\\":\\\"W0216 15:07:34.483097 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 15:07:34.483432 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771254454 cert, and key in /tmp/serving-cert-2870345511/serving-signer.crt, /tmp/serving-cert-2870345511/serving-signer.key\\\\nI0216 15:07:34.729516 1 observer_polling.go:159] Starting file observer\\\\nW0216 15:07:34.732166 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 15:07:34.732501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 15:07:34.736681 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2870345511/tls.crt::/tmp/serving-cert-2870345511/tls.key\\\\\\\"\\\\nF0216 15:07:45.360922 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:52 crc kubenswrapper[4835]: I0216 15:07:52.992240 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:52Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.032036 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.041920 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.041967 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 
15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.041999 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.042026 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042143 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042159 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042171 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042210 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 15:07:55.042196591 +0000 UTC m=+24.334189476 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042258 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042267 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042276 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042281 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042306 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:55.042298124 +0000 UTC m=+24.334291019 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042323 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:55.042312794 +0000 UTC m=+24.334305689 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042680 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.042759 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:55.042741745 +0000 UTC m=+24.334734640 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.073699 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.117390 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.156082 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.198318 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.236258 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.275211 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.318781 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:18:44.370527865 +0000 UTC Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.321165 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\
\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"n
ame\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.377937 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.378058 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.378121 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.378160 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.386737 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.387519 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.388851 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.389471 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.391657 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.392213 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.392968 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.394223 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.395002 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.396170 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.396945 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.398323 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.398960 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.399589 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.400714 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.401298 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.402485 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.403365 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.404434 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.405555 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.406120 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.407655 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.408167 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.409624 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.410092 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.411619 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.412831 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.413414 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.414780 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.415336 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.416354 4835 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.416483 4835 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.419150 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.420378 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.421042 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.423016 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.423912 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.425083 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.425921 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.427215 4835 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.427871 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.429097 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.429982 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.431181 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.431801 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.432967 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.433614 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.434981 4835 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.435638 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.436755 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.437378 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.438486 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.439209 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.439957 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.579917 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.582118 4835 
scope.go:117] "RemoveContainer" containerID="21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7" Feb 16 15:07:53 crc kubenswrapper[4835]: E0216 15:07:53.582330 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.583563 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba" exitCode=0 Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.583605 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.583624 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"f53f650d0776d32d3427dbae6c234ce3becf1abd6f19c6de452bfb0da8df7312"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.585767 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.585826 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" 
event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.585840 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"ebf1b8df3d8575a5fdeabc4ebbc6f5e07f87bc0fee5ea5841c851e8f9ce25870"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.587465 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gncxk" event={"ID":"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d","Type":"ContainerStarted","Data":"de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.587575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gncxk" event={"ID":"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d","Type":"ContainerStarted","Data":"18bf4207747099d74818e0d65d6121c9ba000f6cec430a6f5741740bb27b8646"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.595320 4835 generic.go:334] "Generic (PLEG): container finished" podID="91e35405-0016-467d-9081-272eba8c8aa1" containerID="cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca" exitCode=0 Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.595491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" event={"ID":"91e35405-0016-467d-9081-272eba8c8aa1","Type":"ContainerDied","Data":"cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.595606 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" 
event={"ID":"91e35405-0016-467d-9081-272eba8c8aa1","Type":"ContainerStarted","Data":"fa6523a9f623749c821aff9a9b9cd8655acdee2db8121833c1b833fba453747b"} Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.599110 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.614224 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.624500 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.643471 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.662004 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.675174 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.690503 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 
15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.702760 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.715643 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.727550 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.761280 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.792462 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.842788 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.882732 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.913102 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.954004 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:53 crc kubenswrapper[4835]: I0216 15:07:53.991517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:53Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.034006 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.077194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.115649 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.153981 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.192068 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.232923 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.273191 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.319476 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:28:56.441991778 +0000 UTC Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.320731 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.354580 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.377612 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:54 crc kubenswrapper[4835]: E0216 15:07:54.377727 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.413575 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441
ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.443716 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.471063 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.513011 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.600653 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb"} Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.600692 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211"} Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.600708 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571"} Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.600717 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6"} Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.602367 4835 generic.go:334] "Generic (PLEG): container finished" podID="91e35405-0016-467d-9081-272eba8c8aa1" containerID="e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176" exitCode=0 Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.602383 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" event={"ID":"91e35405-0016-467d-9081-272eba8c8aa1","Type":"ContainerDied","Data":"e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176"} Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.603836 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0"} Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.615462 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.627800 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.638816 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.674299 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.715604 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.753143 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.796716 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.842955 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.873515 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.911640 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.953508 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.960793 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:54 crc kubenswrapper[4835]: E0216 15:07:54.960899 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:07:58.960879071 +0000 UTC m=+28.252871966 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:07:54 crc kubenswrapper[4835]: I0216 15:07:54.991625 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.032759 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 
15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.062568 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.062637 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.062667 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.062692 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.062811 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.062898 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:59.062873833 +0000 UTC m=+28.354866738 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.062826 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.062950 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.062990 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.063025 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.063069 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.062992 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.063112 4835 projected.go:194] Error 
preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.063000 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:59.062978496 +0000 UTC m=+28.354971391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.063163 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:59.06314771 +0000 UTC m=+28.355140825 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.063196 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 15:07:59.063185491 +0000 UTC m=+28.355178626 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.073089 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.111995 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.153200 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.191549 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.230509 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.271986 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.311193 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.320426 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:16:39.53653067 +0000 UTC Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.352095 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.378163 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.378270 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.378427 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:07:55 crc kubenswrapper[4835]: E0216 15:07:55.378509 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.392456 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.433383 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.475491 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.524743 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.570421 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.593878 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.610691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48"} Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.610773 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1"} Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.612999 4835 generic.go:334] "Generic (PLEG): container finished" podID="91e35405-0016-467d-9081-272eba8c8aa1" containerID="6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d" exitCode=0 Feb 16 15:07:55 crc 
kubenswrapper[4835]: I0216 15:07:55.613088 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" event={"ID":"91e35405-0016-467d-9081-272eba8c8aa1","Type":"ContainerDied","Data":"6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d"} Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.638232 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.673553 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.710177 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.755805 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.790914 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.831962 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.872515 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.910647 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.949741 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:55 crc kubenswrapper[4835]: I0216 15:07:55.992504 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.031262 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.072785 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 
15:07:56.136573 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.180517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.193225 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.231700 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.271080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.309286 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.321175 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:28:57.906073228 +0000 UTC Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.378063 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:56 crc kubenswrapper[4835]: E0216 15:07:56.378193 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.618073 4835 generic.go:334] "Generic (PLEG): container finished" podID="91e35405-0016-467d-9081-272eba8c8aa1" containerID="756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84" exitCode=0 Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.618120 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" event={"ID":"91e35405-0016-467d-9081-272eba8c8aa1","Type":"ContainerDied","Data":"756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84"} Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.649677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.663082 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.688139 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.698308 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.709058 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.726895 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15
:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.737880 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.750782 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.763593 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.773578 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.784172 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.795336 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.832559 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58c
e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.873748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.916112 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.974761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.975378 4835 scope.go:117] "RemoveContainer" containerID="21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7" Feb 16 15:07:56 crc kubenswrapper[4835]: E0216 15:07:56.975547 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.979520 4835 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.982283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.982315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.982323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.982368 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.990402 4835 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.990717 4835 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.991696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.991737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.991753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.991773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:56 crc kubenswrapper[4835]: I0216 15:07:56.991788 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:56Z","lastTransitionTime":"2026-02-16T15:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: E0216 15:07:57.005702 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.008579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.008608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.008645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.008660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.008670 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: E0216 15:07:57.020327 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.024000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.024033 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.024045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.024068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.024078 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.061086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.061343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.061428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.061510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.061609 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: E0216 15:07:57.072966 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: E0216 15:07:57.073100 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.074956 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.074997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.075006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.075020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc 
kubenswrapper[4835]: I0216 15:07:57.075030 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.177355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.177384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.177393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.177406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.177414 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.280208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.280238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.280249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.280266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.280279 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.321870 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:36:43.653243748 +0000 UTC Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.378142 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:57 crc kubenswrapper[4835]: E0216 15:07:57.378264 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.378331 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:57 crc kubenswrapper[4835]: E0216 15:07:57.378429 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.381866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.381890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.381900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.381930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.381941 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.484733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.485099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.485282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.485454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.485710 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.588405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.588463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.588483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.588511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.588572 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.625248 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.627920 4835 generic.go:334] "Generic (PLEG): container finished" podID="91e35405-0016-467d-9081-272eba8c8aa1" containerID="5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab" exitCode=0 Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.627955 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" event={"ID":"91e35405-0016-467d-9081-272eba8c8aa1","Type":"ContainerDied","Data":"5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.652876 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.672168 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.685617 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.690367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.690395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.690404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.690418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.690427 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.704848 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.723050 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.740969 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.761850 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15
:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.780392 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.792218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.792246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.792255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.792267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.792278 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.794060 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.809071 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.823281 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.836681 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.848881 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.862770 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.874858 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.894489 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.894510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.894517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.894546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.894555 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.996797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.996851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.996867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.996891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:57 crc kubenswrapper[4835]: I0216 15:07:57.996911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:57Z","lastTransitionTime":"2026-02-16T15:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.099073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.099102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.099110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.099122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.099131 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.201122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.201166 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.201194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.201212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.201225 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.303829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.303869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.303877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.303890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.303900 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.322369 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:05:17.224053351 +0000 UTC Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.378008 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:58 crc kubenswrapper[4835]: E0216 15:07:58.378169 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.406156 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.406211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.406228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.406252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.406271 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.508296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.508334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.508343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.508357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.508367 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.610880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.611148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.611264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.611380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.611477 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.633732 4835 generic.go:334] "Generic (PLEG): container finished" podID="91e35405-0016-467d-9081-272eba8c8aa1" containerID="3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd" exitCode=0 Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.633797 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" event={"ID":"91e35405-0016-467d-9081-272eba8c8aa1","Type":"ContainerDied","Data":"3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.648664 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.668673 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.690677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.714280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.714323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.714336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.714352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.714364 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.714822 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.728716 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.741423 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.754519 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.766821 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.783876 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.796648 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.810192 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.819686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.819735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.819747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.819764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.819775 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.827767 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.839756 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.848670 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.860689 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.921954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.921995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.922004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.922019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:58 crc kubenswrapper[4835]: I0216 15:07:58.922027 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:58Z","lastTransitionTime":"2026-02-16T15:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.003155 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.003386 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:08:07.003349954 +0000 UTC m=+36.295342889 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.024385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.024425 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.024433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.024447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: 
I0216 15:07:59.024458 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.104490 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.104573 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.104631 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.104722 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.104777 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.104796 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.104811 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.104807 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:07.104787762 +0000 UTC m=+36.396780667 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.104868 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-16 15:08:07.104859544 +0000 UTC m=+36.396852449 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.104961 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.105006 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.105028 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.105102 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:07.105077829 +0000 UTC m=+36.397070754 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.105190 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.105286 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.105335 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:07.105321275 +0000 UTC m=+36.397314180 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.126767 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.126807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.126817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.126831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.126843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.229119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.229154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.229164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.229178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.229186 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.323090 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:49:15.181621642 +0000 UTC Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.331341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.331379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.331387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.331400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.331411 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.378520 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.378595 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.378672 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:07:59 crc kubenswrapper[4835]: E0216 15:07:59.378735 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.433167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.433202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.433211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.433226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.433236 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.536385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.536437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.536451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.536469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.536480 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.668465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.668511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.668565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.668597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.668623 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.672243 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" event={"ID":"91e35405-0016-467d-9081-272eba8c8aa1","Type":"ContainerStarted","Data":"e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.679598 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.680423 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.680607 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.687311 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.702064 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.707691 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.710060 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.715701 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.736171 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15
:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.746877 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.757216 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.770307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.770337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.770348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.770365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.770376 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.770617 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.780511 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.790671 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.799276 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.811181 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.822442 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.835719 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.855431 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.865701 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.873047 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.873079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.873088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.873102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.873112 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.877708 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4b
ce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.889458 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.904071 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.914283 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.926461 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.938300 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.951302 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.960828 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.974861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.974904 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.974915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.974930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.974942 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:07:59Z","lastTransitionTime":"2026-02-16T15:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:07:59 crc kubenswrapper[4835]: I0216 15:07:59.976893 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:07:59Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.002663 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:00Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.015040 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:00Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.028779 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:00Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.041214 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:00Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.050446 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:00Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.067637 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15
:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:00Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.077006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.077043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.077057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.077073 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.077485 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.179557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.179617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.179631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.179650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.179663 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.282568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.282605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.282616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.282630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.282639 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.323363 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:41:12.017674991 +0000 UTC Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.377753 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:00 crc kubenswrapper[4835]: E0216 15:08:00.377943 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.384586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.384617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.384625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.384638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.384647 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.487604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.487649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.487662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.487678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.487690 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.589811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.590194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.590350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.590581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.590779 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.683049 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.692724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.692769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.692788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.692810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.692826 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.794443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.794494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.794523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.794619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.794632 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.897310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.897612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.897695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.897777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:00 crc kubenswrapper[4835]: I0216 15:08:00.897856 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:00Z","lastTransitionTime":"2026-02-16T15:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.000499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.000560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.000576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.000592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.000602 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.103140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.103192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.103204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.103222 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.103234 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.205862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.205934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.205956 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.205983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.206003 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.308910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.308966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.308982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.309004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.309017 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.324756 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:52:59.097666331 +0000 UTC Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.378320 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:01 crc kubenswrapper[4835]: E0216 15:08:01.378486 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.379056 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:01 crc kubenswrapper[4835]: E0216 15:08:01.379183 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.404546 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.412177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.412210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.412219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.412232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.412241 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.418586 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.432349 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.445982 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.454591 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.466463 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.478121 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.490294 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.504924 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.517265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.517473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.517574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.517680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.517750 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.518677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.528393 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.540677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.551133 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.565785 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.582328 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.620175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.620201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.620209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.620221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.620231 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.684990 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.722667 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.722723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.722735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.722763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.722772 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.782494 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.828703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.828758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.828771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.828789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.828805 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.930334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.930375 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.930384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.930396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:01 crc kubenswrapper[4835]: I0216 15:08:01.930405 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:01Z","lastTransitionTime":"2026-02-16T15:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.032856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.032897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.032908 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.032925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.032935 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.152979 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.153010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.153018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.153031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.153039 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.256051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.256097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.256106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.256124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.256136 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.325181 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 10:22:16.233375922 +0000 UTC Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.358367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.358407 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.358420 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.358435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.358446 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.377700 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:02 crc kubenswrapper[4835]: E0216 15:08:02.377813 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.417302 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.461200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.461247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.461259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.461276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.461288 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.564114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.564704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.564771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.564831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.564892 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.666940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.666966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.667026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.667040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.667049 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.689491 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/0.log" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.693631 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974" exitCode=1 Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.693663 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.694242 4835 scope.go:117] "RemoveContainer" containerID="b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.728371 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.741459 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.754200 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.766320 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.769362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.769461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.769557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.769630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.769686 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.776564 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.794701 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.808504 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.820671 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.832263 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.843462 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.856048 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.871839 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.872054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.872069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.872078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.872091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.872100 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.881921 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.894594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.911546 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:02Z\\\",\\\"message\\\":\\\"216 15:08:02.104143 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 15:08:02.104156 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 15:08:02.104162 6188 handler.go:208] Removed *v1.Pod 
event handler 3\\\\nI0216 15:08:02.104479 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 15:08:02.104502 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 15:08:02.104514 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 15:08:02.104523 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 15:08:02.104558 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 15:08:02.104561 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 15:08:02.104571 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 15:08:02.104575 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 15:08:02.104577 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 15:08:02.104582 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 15:08:02.104592 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 15:08:02.104605 6188 factory.go:656] Stopping watch factory\\\\nI0216 15:08:02.104617 6188 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965db
c39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:02Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.974785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.974830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.974842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.974858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:02 crc kubenswrapper[4835]: I0216 15:08:02.974869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:02Z","lastTransitionTime":"2026-02-16T15:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.082814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.082947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.082976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.083011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.083035 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.186706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.186793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.186802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.186817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.186826 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.289664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.289693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.289703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.289714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.289725 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.325697 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:26:47.79795062 +0000 UTC Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.378558 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.378640 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:03 crc kubenswrapper[4835]: E0216 15:08:03.378680 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:03 crc kubenswrapper[4835]: E0216 15:08:03.378748 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.391173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.391201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.391209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.391220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.391230 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.493320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.493347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.493356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.493370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.493379 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.597397 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.597447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.597459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.597475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.597484 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.697976 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/1.log" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.698369 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/0.log" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.698659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.698696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.698707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.698726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.698738 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.700311 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f" exitCode=1 Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.700341 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.700371 4835 scope.go:117] "RemoveContainer" containerID="b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.701656 4835 scope.go:117] "RemoveContainer" containerID="7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f" Feb 16 15:08:03 crc kubenswrapper[4835]: E0216 15:08:03.701828 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.721366 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.733753 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.746252 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.757562 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.767442 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.778296 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.787571 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.795944 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.800603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.800633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.800641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.800655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.800665 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.807398 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.817839 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.826753 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.836761 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.845666 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.857031 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.872217 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:02Z\\\",\\\"message\\\":\\\"216 15:08:02.104143 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 15:08:02.104156 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 15:08:02.104162 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 15:08:02.104479 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 
15:08:02.104502 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 15:08:02.104514 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 15:08:02.104523 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 15:08:02.104558 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 15:08:02.104561 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 15:08:02.104571 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 15:08:02.104575 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 15:08:02.104577 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 15:08:02.104582 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 15:08:02.104592 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 15:08:02.104605 6188 factory.go:656] Stopping watch factory\\\\nI0216 15:08:02.104617 6188 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.902982 4835 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.903008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.903017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.903030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:03 crc kubenswrapper[4835]: I0216 15:08:03.903038 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:03Z","lastTransitionTime":"2026-02-16T15:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.005754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.006073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.006211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.006350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.006498 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.109035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.109070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.109082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.109097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.109109 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.211228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.211269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.211281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.211298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.211312 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.313802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.313840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.313849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.313862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.313871 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.326132 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:38:14.044259252 +0000 UTC Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.377856 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:04 crc kubenswrapper[4835]: E0216 15:08:04.377972 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.419134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.419190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.419206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.419227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.419243 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.521709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.521779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.521797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.521821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.521839 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.624003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.624281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.624402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.624523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.624761 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.704629 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/1.log" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.727520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.727577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.727586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.727604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.727614 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.829881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.830142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.830210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.830275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.830331 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.933228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.933264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.933273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.933286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:04 crc kubenswrapper[4835]: I0216 15:08:04.933296 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:04Z","lastTransitionTime":"2026-02-16T15:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.035656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.035695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.035708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.035726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.035739 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.118291 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf"] Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.119381 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.121981 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.122166 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.138421 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.138492 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.138515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.138501 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.138581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.138729 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.149989 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.162675 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.174365 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.176671 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wn8\" (UniqueName: \"kubernetes.io/projected/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-kube-api-access-89wn8\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.176828 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.176950 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.177053 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.186154 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d
970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.205939 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.217153 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.231516 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.241757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.241797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.241805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.241820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.241828 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.243796 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.256120 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.266386 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.275132 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.277721 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.277836 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.277927 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.278014 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89wn8\" (UniqueName: 
\"kubernetes.io/projected/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-kube-api-access-89wn8\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.278450 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.278590 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.283876 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.284982 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.293394 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89wn8\" (UniqueName: 
\"kubernetes.io/projected/a25ef07f-df59-41c2-8ad5-fe6bdc50345a-kube-api-access-89wn8\") pod \"ovnkube-control-plane-749d76644c-2bssf\" (UID: \"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.295421 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.309060 4835 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb87
2b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.325132 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:02Z\\\",\\\"message\\\":\\\"216 15:08:02.104143 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 15:08:02.104156 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 15:08:02.104162 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 15:08:02.104479 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 
15:08:02.104502 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 15:08:02.104514 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 15:08:02.104523 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 15:08:02.104558 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 15:08:02.104561 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 15:08:02.104571 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 15:08:02.104575 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 15:08:02.104577 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 15:08:02.104582 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 15:08:02.104592 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 15:08:02.104605 6188 factory.go:656] Stopping watch factory\\\\nI0216 15:08:02.104617 6188 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.327000 4835 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:21:07.831340015 +0000 UTC Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.343838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.343881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.343890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.343905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.343916 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.378272 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.378332 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:05 crc kubenswrapper[4835]: E0216 15:08:05.378376 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:05 crc kubenswrapper[4835]: E0216 15:08:05.378457 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.440518 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.446306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.446351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.446362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.446378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.446391 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.548795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.548834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.548844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.548859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.548870 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.651130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.651165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.651173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.651187 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.651197 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.711601 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" event={"ID":"a25ef07f-df59-41c2-8ad5-fe6bdc50345a","Type":"ContainerStarted","Data":"d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.711651 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" event={"ID":"a25ef07f-df59-41c2-8ad5-fe6bdc50345a","Type":"ContainerStarted","Data":"345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.711664 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" event={"ID":"a25ef07f-df59-41c2-8ad5-fe6bdc50345a","Type":"ContainerStarted","Data":"bf9253ab1d44b1d01942687de71e93c0106c7b8b3dfbef5e3e37da97b38e8f36"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.732520 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.750099 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:02Z\\\",\\\"message\\\":\\\"216 15:08:02.104143 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 15:08:02.104156 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 15:08:02.104162 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 15:08:02.104479 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 
15:08:02.104502 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 15:08:02.104514 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 15:08:02.104523 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 15:08:02.104558 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 15:08:02.104561 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 15:08:02.104571 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 15:08:02.104575 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 15:08:02.104577 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 15:08:02.104582 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 15:08:02.104592 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 15:08:02.104605 6188 factory.go:656] Stopping watch factory\\\\nI0216 15:08:02.104617 6188 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.753639 4835 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.753695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.753711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.753733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.753750 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.763652 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.777911 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.790428 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.807496 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.829995 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15
:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.843612 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.854470 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.855653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.855683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.855693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.855709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.855721 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.865554 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.881651 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.894384 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.907312 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.917188 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.929417 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.944145 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.957665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.957708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.957719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.957734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:05 crc kubenswrapper[4835]: I0216 15:08:05.957751 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:05Z","lastTransitionTime":"2026-02-16T15:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.059717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.059811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.059833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.059858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.059875 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.162593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.162636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.162646 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.162664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.162675 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.206746 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-b5nkt"] Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.207296 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:06 crc kubenswrapper[4835]: E0216 15:08:06.207382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.221506 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.253487 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:02Z\\\",\\\"message\\\":\\\"216 15:08:02.104143 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 15:08:02.104156 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 15:08:02.104162 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 15:08:02.104479 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 
15:08:02.104502 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 15:08:02.104514 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 15:08:02.104523 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 15:08:02.104558 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 15:08:02.104561 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 15:08:02.104571 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 15:08:02.104575 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 15:08:02.104577 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 15:08:02.104582 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 15:08:02.104592 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 15:08:02.104605 6188 factory.go:656] Stopping watch factory\\\\nI0216 15:08:02.104617 6188 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.265349 4835 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.265392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.265403 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.265418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.265429 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.267290 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.286835 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.288499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmkhj\" (UniqueName: \"kubernetes.io/projected/5121c96d-796f-46b5-8889-b7e74c329b2f-kube-api-access-rmkhj\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.288656 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.304962 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.314562 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.324830 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc 
kubenswrapper[4835]: I0216 15:08:06.328031 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:15:19.061466354 +0000 UTC Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.347468 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9e
bd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.359179 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.368330 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.368406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.368431 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.368465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.368486 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.373392 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.379677 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:06 crc kubenswrapper[4835]: E0216 15:08:06.379924 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.389486 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.389584 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmkhj\" (UniqueName: \"kubernetes.io/projected/5121c96d-796f-46b5-8889-b7e74c329b2f-kube-api-access-rmkhj\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:06 crc kubenswrapper[4835]: E0216 15:08:06.389762 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:06 crc kubenswrapper[4835]: E0216 15:08:06.389884 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs podName:5121c96d-796f-46b5-8889-b7e74c329b2f nodeName:}" failed. No retries permitted until 2026-02-16 15:08:06.889849982 +0000 UTC m=+36.181842917 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs") pod "network-metrics-daemon-b5nkt" (UID: "5121c96d-796f-46b5-8889-b7e74c329b2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.390695 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.403801 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c
9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.406390 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmkhj\" (UniqueName: \"kubernetes.io/projected/5121c96d-796f-46b5-8889-b7e74c329b2f-kube-api-access-rmkhj\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.416661 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.429784 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.441072 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.461301 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.470694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.470769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.470779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.470815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.470827 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.476338 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.573602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.573663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.573671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.573686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.573698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.675942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.675984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.675993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.676009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.676019 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.778487 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.778564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.778578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.778594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.778604 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.880739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.880766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.880774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.880786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.880794 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.893864 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:06 crc kubenswrapper[4835]: E0216 15:08:06.894032 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:06 crc kubenswrapper[4835]: E0216 15:08:06.894109 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs podName:5121c96d-796f-46b5-8889-b7e74c329b2f nodeName:}" failed. No retries permitted until 2026-02-16 15:08:07.894090655 +0000 UTC m=+37.186083540 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs") pod "network-metrics-daemon-b5nkt" (UID: "5121c96d-796f-46b5-8889-b7e74c329b2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.983174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.983211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.983221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.983238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:06 crc kubenswrapper[4835]: I0216 15:08:06.983249 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:06Z","lastTransitionTime":"2026-02-16T15:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.085664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.085691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.085700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.085712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.085722 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.095664 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.095847 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:08:23.095833175 +0000 UTC m=+52.387826070 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.188741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.188791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.188805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.188821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.188833 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.196552 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.196600 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.196630 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.196672 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196742 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196769 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196743 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196781 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196804 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196816 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:23.196803643 +0000 UTC m=+52.488796538 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196816 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196831 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:23.196824493 +0000 UTC m=+52.488817378 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196781 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196885 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196857 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:23.196846104 +0000 UTC m=+52.488839009 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.196950 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:23.196935586 +0000 UTC m=+52.488928481 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.290815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.290855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.290878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.290893 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.290901 4835 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.328807 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 06:37:30.416001568 +0000 UTC Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.378620 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.378650 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.378812 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.378835 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.378924 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.379051 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.392970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.393012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.393022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.393038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.393048 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.461470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.461577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.461603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.461629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.461646 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.474571 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:07Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.478867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.478939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.478950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.478964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.478974 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.493006 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:07Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.496585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.496640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.496650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.496662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.496671 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.509104 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:07Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.513030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.513068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.513077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.513092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.513102 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.524249 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:07Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.528021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.528101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.528139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.528172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.528196 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.543552 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:07Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.543693 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.545107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.545154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.545171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.545204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.545219 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.648656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.648714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.648732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.648758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.648775 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.750791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.750853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.750872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.750896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.750914 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.853803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.853852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.853862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.853879 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.853887 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.903906 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.904039 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: E0216 15:08:07.904114 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs podName:5121c96d-796f-46b5-8889-b7e74c329b2f nodeName:}" failed. No retries permitted until 2026-02-16 15:08:09.904096488 +0000 UTC m=+39.196089393 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs") pod "network-metrics-daemon-b5nkt" (UID: "5121c96d-796f-46b5-8889-b7e74c329b2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.956102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.956137 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.956145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.956160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:07 crc kubenswrapper[4835]: I0216 15:08:07.956169 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:07Z","lastTransitionTime":"2026-02-16T15:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.058419 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.058465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.058475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.058490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.058499 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.160353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.160441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.160463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.160489 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.160510 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.263026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.263066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.263075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.263089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.263099 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.329763 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:10:47.831439872 +0000 UTC Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.365724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.365759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.365768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.365781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.365789 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.378302 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:08 crc kubenswrapper[4835]: E0216 15:08:08.378432 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.467894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.467946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.467959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.467977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.467987 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.570835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.570883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.570891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.570908 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.570926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.673769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.673803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.673810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.673824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.673834 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.776897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.776937 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.776949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.776963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.776972 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.879458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.879580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.879602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.879632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.879649 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.982390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.982470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.982494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.982519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:08 crc kubenswrapper[4835]: I0216 15:08:08.982581 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:08Z","lastTransitionTime":"2026-02-16T15:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.086017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.086076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.086090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.086108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.086120 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.188931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.188980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.188992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.189012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.189024 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.292567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.292650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.292674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.292702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.292724 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.330887 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:44:10.824399175 +0000 UTC Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.378581 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.378702 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:09 crc kubenswrapper[4835]: E0216 15:08:09.378843 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:09 crc kubenswrapper[4835]: E0216 15:08:09.379354 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.379459 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:09 crc kubenswrapper[4835]: E0216 15:08:09.379632 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.379706 4835 scope.go:117] "RemoveContainer" containerID="21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.395099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.395159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.395178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.395226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.395257 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.498086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.498163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.498183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.498205 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.498218 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.600422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.600673 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.600688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.600708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.600721 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.703154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.703224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.703430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.703483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.703500 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.726314 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.728007 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.729341 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.746179 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d4224
9492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.758602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc 
kubenswrapper[4835]: I0216 15:08:09.776303 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.786279 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.802692 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.806059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.806097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.806132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.806149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.806161 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.813141 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.828292 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.841375 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.853642 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.872600 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc 
kubenswrapper[4835]: I0216 15:08:09.885481 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc 
kubenswrapper[4835]: I0216 15:08:09.898462 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.908335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.908377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.908387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.908404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.908415 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:09Z","lastTransitionTime":"2026-02-16T15:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.913138 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.922386 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.924239 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:09 crc kubenswrapper[4835]: E0216 15:08:09.924416 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:09 crc kubenswrapper[4835]: E0216 15:08:09.924502 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs podName:5121c96d-796f-46b5-8889-b7e74c329b2f nodeName:}" failed. No retries permitted until 2026-02-16 15:08:13.924485851 +0000 UTC m=+43.216478746 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs") pod "network-metrics-daemon-b5nkt" (UID: "5121c96d-796f-46b5-8889-b7e74c329b2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.932753 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc 
kubenswrapper[4835]: I0216 15:08:09.947365 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48
a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:09 crc kubenswrapper[4835]: I0216 15:08:09.967275 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:02Z\\\",\\\"message\\\":\\\"216 15:08:02.104143 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 15:08:02.104156 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 15:08:02.104162 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 15:08:02.104479 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 
15:08:02.104502 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 15:08:02.104514 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 15:08:02.104523 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 15:08:02.104558 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 15:08:02.104561 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 15:08:02.104571 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 15:08:02.104575 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 15:08:02.104577 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 15:08:02.104582 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 15:08:02.104592 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 15:08:02.104605 6188 factory.go:656] Stopping watch factory\\\\nI0216 15:08:02.104617 6188 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.010859 4835 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.010910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.010926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.010948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.010964 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.112857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.112888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.112895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.112907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.112915 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.214514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.214569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.214581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.214600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.214610 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.315966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.316012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.316023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.316038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.316048 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.331621 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:54:47.614733372 +0000 UTC Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.378015 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:10 crc kubenswrapper[4835]: E0216 15:08:10.378170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.418299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.418338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.418348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.418363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.418375 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.520934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.520978 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.520994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.521009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.521019 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.623219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.623281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.623293 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.623308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.623318 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.726436 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.726491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.726501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.726515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.726524 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.828378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.828421 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.828435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.828451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.828463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.931797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.931859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.931877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.931899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:10 crc kubenswrapper[4835]: I0216 15:08:10.931916 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:10Z","lastTransitionTime":"2026-02-16T15:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.035173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.035225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.035242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.035261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.035274 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.137495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.137624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.137647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.137671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.137692 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.239991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.240051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.240067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.240091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.240109 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.332345 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:41:32.567514473 +0000 UTC Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.343372 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.343429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.343452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.343482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.343502 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.377905 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.378041 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:11 crc kubenswrapper[4835]: E0216 15:08:11.378238 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.378293 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:11 crc kubenswrapper[4835]: E0216 15:08:11.378575 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:11 crc kubenswrapper[4835]: E0216 15:08:11.378682 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.398803 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.422196 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.446885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.446941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.446958 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.446982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.447000 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.455661 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39db9dae1fa127363338bcc6bc83eaa8e8b1d727a53277b08f7f54783d4e974\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:02Z\\\",\\\"message\\\":\\\"216 15:08:02.104143 6188 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 15:08:02.104156 6188 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 15:08:02.104162 6188 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 15:08:02.104479 6188 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 
15:08:02.104502 6188 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 15:08:02.104514 6188 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 15:08:02.104523 6188 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 15:08:02.104558 6188 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 15:08:02.104561 6188 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 15:08:02.104571 6188 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 15:08:02.104575 6188 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 15:08:02.104577 6188 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 15:08:02.104582 6188 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 15:08:02.104592 6188 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 15:08:02.104605 6188 factory.go:656] Stopping watch factory\\\\nI0216 15:08:02.104617 6188 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.482013 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8
d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.498897 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.517611 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.532104 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.544403 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.550021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.550078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.550098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.550123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.550143 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.561941 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc 
kubenswrapper[4835]: I0216 15:08:11.583826 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.601964 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.618201 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.633962 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc 
kubenswrapper[4835]: I0216 15:08:11.651964 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.652322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.652367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.652392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.652423 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.652447 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.668074 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.682277 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.696011 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.755394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.755433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.755444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.755460 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.755471 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.858690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.858758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.858776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.858802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.858819 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.961387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.961437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.961451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.961471 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:11 crc kubenswrapper[4835]: I0216 15:08:11.961486 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:11Z","lastTransitionTime":"2026-02-16T15:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.065090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.065149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.065166 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.065188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.065205 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.168359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.168409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.168425 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.168447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.168466 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.271359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.271755 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.271904 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.272067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.272202 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.332872 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:02:39.74008174 +0000 UTC Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.375175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.375229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.375240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.375257 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.375269 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.378492 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:12 crc kubenswrapper[4835]: E0216 15:08:12.378678 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.477255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.477313 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.477333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.477357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.477375 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.580019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.580321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.580435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.580553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.580663 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.683624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.683665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.683674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.683690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.683700 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.786134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.786272 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.786296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.786320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.786339 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.889752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.889803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.889823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.889849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.889865 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.992997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.993064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.993087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.993116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:12 crc kubenswrapper[4835]: I0216 15:08:12.993136 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:12Z","lastTransitionTime":"2026-02-16T15:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.096574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.096622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.096631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.096645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.096656 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.199714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.199776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.199792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.199816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.199833 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.303111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.303181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.303204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.303231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.303251 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.333880 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:39:26.013730639 +0000 UTC Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.378789 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:13 crc kubenswrapper[4835]: E0216 15:08:13.378965 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.379476 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.379636 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:13 crc kubenswrapper[4835]: E0216 15:08:13.380137 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:13 crc kubenswrapper[4835]: E0216 15:08:13.380159 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.406666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.406990 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.407171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.407318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.407453 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.511051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.511466 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.511644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.511819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.511946 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.615350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.615479 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.615499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.615522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.615577 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.718851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.719045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.719077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.719108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.719144 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.822267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.822808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.822954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.823141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.823265 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.926930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.927242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.927416 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.927633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.927813 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:13Z","lastTransitionTime":"2026-02-16T15:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:13 crc kubenswrapper[4835]: I0216 15:08:13.966167 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:13 crc kubenswrapper[4835]: E0216 15:08:13.966375 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:13 crc kubenswrapper[4835]: E0216 15:08:13.967333 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs podName:5121c96d-796f-46b5-8889-b7e74c329b2f nodeName:}" failed. No retries permitted until 2026-02-16 15:08:21.966762407 +0000 UTC m=+51.258755342 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs") pod "network-metrics-daemon-b5nkt" (UID: "5121c96d-796f-46b5-8889-b7e74c329b2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.031745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.031811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.031828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.032331 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.032390 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.136055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.136107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.136126 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.136153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.136174 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.238768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.238849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.238873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.238990 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.239078 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.334403 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:09:40.958031576 +0000 UTC Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.341399 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.341448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.341457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.341475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.341488 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.378258 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:14 crc kubenswrapper[4835]: E0216 15:08:14.378410 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.444414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.444458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.444473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.444490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.444501 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.547702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.547738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.547749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.547764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.547776 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.651189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.651285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.651304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.651327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.651348 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.754134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.754430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.754438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.754450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.754459 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.856808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.856923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.857241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.857558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.857615 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.960429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.960459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.960467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.960478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:14 crc kubenswrapper[4835]: I0216 15:08:14.960487 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:14Z","lastTransitionTime":"2026-02-16T15:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.063335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.063414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.063438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.063469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.063492 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.166633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.166680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.166691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.166712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.166723 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.270495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.270625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.270644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.270677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.270698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.335438 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:02:00.320872295 +0000 UTC Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.373728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.373774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.373786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.373802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.373812 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.378033 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.378062 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:15 crc kubenswrapper[4835]: E0216 15:08:15.378157 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.378303 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:15 crc kubenswrapper[4835]: E0216 15:08:15.378293 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:15 crc kubenswrapper[4835]: E0216 15:08:15.378500 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.476389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.476451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.476468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.476492 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.476511 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.579089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.579135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.579146 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.579164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.579176 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.681823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.681875 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.681888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.681906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.681924 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.784754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.784791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.784802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.784820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.784832 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.887287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.887668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.887885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.888098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.888302 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.991106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.991155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.991168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.991192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:15 crc kubenswrapper[4835]: I0216 15:08:15.991207 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:15Z","lastTransitionTime":"2026-02-16T15:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.094227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.094268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.094281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.094300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.094313 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.197184 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.197236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.197252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.197276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.197293 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.300779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.300848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.300865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.300889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.300907 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.336363 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:35:53.460931242 +0000 UTC Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.378100 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:16 crc kubenswrapper[4835]: E0216 15:08:16.378342 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.403698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.403735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.403746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.403761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.403772 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.506116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.506175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.506186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.506206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.506219 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.608595 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.608634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.608646 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.608660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.608670 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.711001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.711078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.711096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.711120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.711140 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.813878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.813950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.813975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.814041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.814070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.915723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.915778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.915794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.915816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:16 crc kubenswrapper[4835]: I0216 15:08:16.915835 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:16Z","lastTransitionTime":"2026-02-16T15:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.019275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.019340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.019354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.019379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.019396 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.122694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.122759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.122777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.122804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.122822 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.226422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.226593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.226622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.226656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.226679 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.329961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.330006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.330017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.330032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.330045 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.337419 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:53:26.809526856 +0000 UTC Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.378435 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.378507 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.378507 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.378718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.378872 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.378955 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.433318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.433378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.433399 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.433425 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.433443 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.537327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.537413 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.537430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.537455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.537470 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.640703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.640792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.640808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.640833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.640860 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.719482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.719613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.719625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.719641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.719653 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.735226 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:17Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.741666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.741719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.741733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.741753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.741766 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.755843 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:17Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.762432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.762485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.762498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.762520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.762555 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.778327 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:17Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.783205 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.783236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.783247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.783263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.783277 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.801873 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:17Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.807178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.807253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.807271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.807302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.807328 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.822082 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:17Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:17 crc kubenswrapper[4835]: E0216 15:08:17.822238 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.823718 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.823754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.823767 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.823790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.823803 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.926606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.926659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.926674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.926693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:17 crc kubenswrapper[4835]: I0216 15:08:17.926705 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:17Z","lastTransitionTime":"2026-02-16T15:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.029135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.029212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.029240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.029276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.029302 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.132003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.132066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.132080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.132102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.132115 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.234399 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.234445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.234459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.234477 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.234489 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.337141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.337251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.337270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.337312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.337330 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.337525 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:45:07.925660662 +0000 UTC Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.377768 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:18 crc kubenswrapper[4835]: E0216 15:08:18.377940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.378980 4835 scope.go:117] "RemoveContainer" containerID="7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.396134 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2
e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.424206 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc 
kubenswrapper[4835]: I0216 15:08:18.440970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.441020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.441033 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.441057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.441070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.453214 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.476018 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.493586 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.511025 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.530218 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.547166 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.559659 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.570251 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc 
kubenswrapper[4835]: I0216 15:08:18.583057 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc 
kubenswrapper[4835]: I0216 15:08:18.594765 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.608304 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.616611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.616645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.616702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.616716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.616725 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.623178 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.635319 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.649815 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.669469 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service 
openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.719640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.719667 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.719677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.719690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.719699 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.763264 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/1.log" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.765923 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.766064 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.776133 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.784154 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc 
kubenswrapper[4835]: I0216 15:08:18.802211 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.814233 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.822639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.822775 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.822853 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.822936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.823006 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.828308 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.845791 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.864281 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.882190 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.898299 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.915323 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.925649 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.925699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.925712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.925734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.925752 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:18Z","lastTransitionTime":"2026-02-16T15:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.932903 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z 
is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.944845 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.957800 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.968722 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.979610 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:18 crc kubenswrapper[4835]: I0216 15:08:18.995501 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:18Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.013831 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service 
openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: 
[]services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\
\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.028009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.028233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.028309 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.028401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.028505 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.131417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.131783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.131863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.131962 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.132061 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.234610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.234670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.234683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.234700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.234716 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.337305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.337389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.337409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.337439 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.337463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.337794 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:47:32.243107352 +0000 UTC Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.378766 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.378802 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:19 crc kubenswrapper[4835]: E0216 15:08:19.378897 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.378967 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:19 crc kubenswrapper[4835]: E0216 15:08:19.379127 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:19 crc kubenswrapper[4835]: E0216 15:08:19.379318 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.405818 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.440093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.440126 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.440135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.440147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.440156 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.542695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.542975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.543039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.543116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.543179 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.645999 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.646079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.646103 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.646135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.646156 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.748864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.748921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.748939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.748962 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.748979 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.770436 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/2.log" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.771410 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/1.log" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.774355 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627" exitCode=1 Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.774409 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.774462 4835 scope.go:117] "RemoveContainer" containerID="7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.775269 4835 scope.go:117] "RemoveContainer" containerID="8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627" Feb 16 15:08:19 crc kubenswrapper[4835]: E0216 15:08:19.775557 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.800356 4835 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc 
kubenswrapper[4835]: I0216 15:08:19.831142 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.847837 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.851382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.851490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.851604 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.851691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.851829 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.868935 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.884517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.895869 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.911684 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 
UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.928244 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.943137 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.954871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.955107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.955198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.955304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.955418 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:19Z","lastTransitionTime":"2026-02-16T15:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.958400 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.972913 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.986407 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:19 crc kubenswrapper[4835]: I0216 15:08:19.997213 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.009342 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.019516 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.031846 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.056054 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service 
openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\
\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\
\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: 
I0216 15:08:20.057938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.057973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.058009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.058025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.058038 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.061179 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.072931 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.084078 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.094349 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.106500 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.116963 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.134030 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.153598 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7810dfca1b0a5854b15e0e508e446ac14cdf0f602bd18e00967f3dc78f657b1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:03Z\\\",\\\"message\\\":\\\":03.619560 6310 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0216 15:08:03.619427 6310 services_controller.go:356] Processing sync for service 
openshift-machine-api/cluster-autoscaler-operator for network=default\\\\nF0216 15:08:03.619776 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:03Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:03.619776 6310 services_controller.go:451] Built service openshift-operator-lifecycle-manager/package-server-manager-metrics cluster-wide LB for network=default: []services.L\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\
\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\
\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: 
I0216 15:08:20.160332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.160557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.160726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.160850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.160988 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.175397 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.191641 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.206212 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.223797 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.233988 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.248725 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc 
kubenswrapper[4835]: I0216 15:08:20.263057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.263151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.263175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.263199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.263224 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.267952 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06
bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.281131 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.293597 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.305147 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.338728 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:47:12.061262472 +0000 UTC Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.365416 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.365457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.365471 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.365493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.365508 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.377817 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:20 crc kubenswrapper[4835]: E0216 15:08:20.377903 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.467744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.467815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.467831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.467955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.467975 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.574623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.574678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.574692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.574709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.574721 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.677743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.677815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.677839 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.677871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.677891 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.779409 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/2.log" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.780255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.780290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.780301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.780316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.780328 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.784030 4835 scope.go:117] "RemoveContainer" containerID="8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627" Feb 16 15:08:20 crc kubenswrapper[4835]: E0216 15:08:20.784189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.803718 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.820824 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.831907 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.881452 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.883004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.883061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.883077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.883104 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.883124 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.903970 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.921069 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.931630 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.942165 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.943396 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.964316 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.977817 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.986219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.986264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.986279 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.986301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.986315 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:20Z","lastTransitionTime":"2026-02-16T15:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:20 crc kubenswrapper[4835]: I0216 15:08:20.991723 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:20Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.007439 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.020630 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.036206 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc 
kubenswrapper[4835]: I0216 15:08:21.053996 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485
288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 
15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.069565 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.087940 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.089422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.089469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.089481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.089502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.089515 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.104697 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.122471 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6
f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.141399 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.167313 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.182596 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.192238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.192276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.192286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.192301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.192311 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.204277 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.219961 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.239599 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.259911 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.272023 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc 
kubenswrapper[4835]: I0216 15:08:21.291445 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.294856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.294894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.294903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.294918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.294927 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.305624 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4b
ce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.316442 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.325459 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.336363 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62c
bddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.339223 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:34:22.25862785 +0000 UTC Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.348584 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.364432 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.375446 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.377872 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.377926 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:21 crc kubenswrapper[4835]: E0216 15:08:21.377951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:21 crc kubenswrapper[4835]: E0216 15:08:21.378046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.378171 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:21 crc kubenswrapper[4835]: E0216 15:08:21.378236 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.389321 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.398134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.398190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.398203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.398219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.398229 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.401741 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.411679 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.420291 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.431111 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.440760 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.455517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.474727 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.487741 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.497405 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.499671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.499697 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.499708 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.499723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.499736 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.508964 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.519453 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.529932 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.538436 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc 
kubenswrapper[4835]: I0216 15:08:21.556115 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.570103 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.579731 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.588705 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.599009 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62c
bddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:21Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.601383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.601421 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.601445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.601458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.601466 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.704382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.704686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.704758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.704846 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.704928 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.807368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.807407 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.807415 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.807429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.807439 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.910647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.910706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.910729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.910756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:21 crc kubenswrapper[4835]: I0216 15:08:21.910777 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:21Z","lastTransitionTime":"2026-02-16T15:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.013414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.013465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.013481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.013502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.013518 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.052917 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:22 crc kubenswrapper[4835]: E0216 15:08:22.053335 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:22 crc kubenswrapper[4835]: E0216 15:08:22.053604 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs podName:5121c96d-796f-46b5-8889-b7e74c329b2f nodeName:}" failed. No retries permitted until 2026-02-16 15:08:38.053577169 +0000 UTC m=+67.345570094 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs") pod "network-metrics-daemon-b5nkt" (UID: "5121c96d-796f-46b5-8889-b7e74c329b2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.116437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.116788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.116922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.117044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.117165 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.219353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.219398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.219407 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.219420 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.219430 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.321788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.321834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.321845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.321861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.321870 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.340018 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:45:55.691502412 +0000 UTC Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.378476 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:22 crc kubenswrapper[4835]: E0216 15:08:22.378634 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.423848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.423899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.423910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.423927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.423938 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.526039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.526076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.526086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.526102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.526113 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.629003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.629034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.629042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.629054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.629063 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.731086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.731312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.731323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.731338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.731349 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.833888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.834167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.834254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.834332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.834437 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.936400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.936432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.936442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.936454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:22 crc kubenswrapper[4835]: I0216 15:08:22.936463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:22Z","lastTransitionTime":"2026-02-16T15:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.038273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.038332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.038343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.038355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.038363 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.140459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.140550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.140562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.140577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.140588 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.166104 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.166266 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:08:55.166242015 +0000 UTC m=+84.458234910 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.243260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.243304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.243321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.243342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.243356 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.267741 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.268114 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.268229 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.268379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.267917 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.268687 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.268778 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.268928 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:55.268907552 +0000 UTC m=+84.560900467 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.268297 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.269742 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.268332 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.268500 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.269875 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.269922 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:55.269912799 +0000 UTC m=+84.561905694 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.269937 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:55.26993124 +0000 UTC m=+84.561924135 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.270014 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:55.269972091 +0000 UTC m=+84.561965016 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.340901 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:48:59.904397743 +0000 UTC Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.346314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.346352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.346363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.346394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.346403 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.378131 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.378131 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.378356 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.378424 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.378697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:23 crc kubenswrapper[4835]: E0216 15:08:23.378925 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.449772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.450169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.450179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.450194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.450203 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.552942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.552982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.552990 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.553004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.553013 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.655564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.655605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.655617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.655632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.655640 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.759779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.759861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.759877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.759895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.759907 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.862226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.862267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.862276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.862295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.862306 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.964924 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.964961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.964969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.964984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:23 crc kubenswrapper[4835]: I0216 15:08:23.964993 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:23Z","lastTransitionTime":"2026-02-16T15:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.067856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.067886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.067898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.067949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.067994 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.170299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.170339 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.170350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.170367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.170381 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.273296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.273327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.273336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.273348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.273357 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.341795 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:08:42.345552441 +0000 UTC Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.375771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.375812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.375821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.375836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.375848 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.378224 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:24 crc kubenswrapper[4835]: E0216 15:08:24.378326 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.478966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.479027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.479037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.479052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.479064 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.581390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.581452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.581473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.581495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.581511 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.685348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.685407 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.685418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.685436 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.685450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.788479 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.788571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.788590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.788613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.788631 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.890634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.890666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.890675 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.890690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.890701 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.993446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.993498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.993516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.993584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:24 crc kubenswrapper[4835]: I0216 15:08:24.993604 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:24Z","lastTransitionTime":"2026-02-16T15:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.096710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.096766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.096783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.096814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.096833 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.200418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.200503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.200557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.200587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.200609 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.304340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.304407 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.304427 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.304455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.304475 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.342522 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:04:49.420794689 +0000 UTC Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.377996 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:25 crc kubenswrapper[4835]: E0216 15:08:25.378189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.378585 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.378762 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:25 crc kubenswrapper[4835]: E0216 15:08:25.378869 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:25 crc kubenswrapper[4835]: E0216 15:08:25.378978 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.406907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.406963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.406987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.407020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.407051 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.510749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.510822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.510839 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.510868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.510887 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.613851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.613901 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.613913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.613929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.613940 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.716957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.717028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.717053 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.717090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.717117 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.819733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.819783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.819792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.819810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.819819 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.923306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.923379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.923397 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.923428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:25 crc kubenswrapper[4835]: I0216 15:08:25.923450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:25Z","lastTransitionTime":"2026-02-16T15:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.030611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.030659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.030672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.030686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.030698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.133374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.133452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.133469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.133497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.133516 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.237035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.237087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.237102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.237124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.237140 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.339655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.339790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.339812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.339843 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.339898 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.342975 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:19:16.729268379 +0000 UTC Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.378304 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:26 crc kubenswrapper[4835]: E0216 15:08:26.378486 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.442779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.442828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.442836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.442859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.442871 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.545221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.545275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.545283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.545295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.545304 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.648790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.648839 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.648850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.648869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.648888 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.752040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.752118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.752136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.752166 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.752185 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.856198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.856261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.856273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.856292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.856307 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.959777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.959838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.959856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.959885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:26 crc kubenswrapper[4835]: I0216 15:08:26.959909 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:26Z","lastTransitionTime":"2026-02-16T15:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.062891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.062975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.062997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.063026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.063047 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.166411 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.166522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.166614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.166650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.166693 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.271421 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.271490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.271505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.271546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.271569 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.343927 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:03:48.40597802 +0000 UTC Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.374878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.374949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.374967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.374995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.375015 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.378370 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.378364 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:27 crc kubenswrapper[4835]: E0216 15:08:27.378646 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.378830 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:27 crc kubenswrapper[4835]: E0216 15:08:27.379053 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:27 crc kubenswrapper[4835]: E0216 15:08:27.379215 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.478735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.478783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.478801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.478824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.478842 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.582243 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.582313 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.582334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.582357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.582374 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.685049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.685090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.685098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.685112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.685121 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.789145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.789213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.789227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.789245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.789257 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.892739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.892820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.892842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.892869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.892888 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.909242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.909302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.909315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.909336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.909354 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: E0216 15:08:27.928641 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:27Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.933707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.933761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.933779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.934176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.934248 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: E0216 15:08:27.955146 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:27Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.961228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.961278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.961294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.961318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.961334 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:27 crc kubenswrapper[4835]: E0216 15:08:27.979973 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:27Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.985808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.985902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.985919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.985940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:27 crc kubenswrapper[4835]: I0216 15:08:27.985988 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:27Z","lastTransitionTime":"2026-02-16T15:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: E0216 15:08:28.008566 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:28Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.014756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.014820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.014842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.014869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.014893 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: E0216 15:08:28.036854 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:28Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:28 crc kubenswrapper[4835]: E0216 15:08:28.037187 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.040484 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.040580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.040607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.040638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.040659 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.142590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.142616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.142623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.142635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.142645 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.245344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.245382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.245393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.245408 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.245419 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.344802 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:58:14.449913997 +0000 UTC Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.348364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.348437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.348449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.348489 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.348507 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.377874 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:28 crc kubenswrapper[4835]: E0216 15:08:28.378036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.451434 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.451503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.451525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.451584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.451605 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.555283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.555587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.555608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.555631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.555652 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.658258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.658311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.658327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.658350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.658370 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.761862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.761918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.761934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.761955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.761972 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.865341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.865410 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.865419 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.865458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.865469 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.969446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.969574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.969587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.969613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:28 crc kubenswrapper[4835]: I0216 15:08:28.969631 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:28Z","lastTransitionTime":"2026-02-16T15:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.071817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.071851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.071859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.071872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.071881 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.174507 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.174570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.174578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.174592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.174603 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.276935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.276977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.276988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.277000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.277009 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.345828 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:45:34.956604801 +0000 UTC Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.377587 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:29 crc kubenswrapper[4835]: E0216 15:08:29.377708 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.377760 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.377924 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:29 crc kubenswrapper[4835]: E0216 15:08:29.377928 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:29 crc kubenswrapper[4835]: E0216 15:08:29.377992 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.379348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.379425 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.379442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.379469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.379494 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.482880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.482989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.483011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.483044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.483068 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.586057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.586120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.586142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.586171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.586194 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.688236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.688270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.688277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.688308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.688317 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.791362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.791420 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.791434 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.791452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.791464 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.894733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.894874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.894900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.894935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.894959 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.999111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.999197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.999231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.999267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:29 crc kubenswrapper[4835]: I0216 15:08:29.999294 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:29Z","lastTransitionTime":"2026-02-16T15:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.102806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.102886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.102905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.102935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.102958 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.206342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.206429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.206449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.206475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.206494 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.309362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.309436 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.309463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.309497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.309515 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.346316 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:34:01.146379179 +0000 UTC Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.378027 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:30 crc kubenswrapper[4835]: E0216 15:08:30.378190 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.411808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.411881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.411902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.411930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.411950 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.514991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.515069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.515092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.515129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.515148 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.618822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.618877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.618891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.618913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.618928 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.721751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.721832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.721844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.721861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.721879 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.824041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.824104 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.824120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.824138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.824151 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.926856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.926924 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.926942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.926968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:30 crc kubenswrapper[4835]: I0216 15:08:30.926989 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:30Z","lastTransitionTime":"2026-02-16T15:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.030193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.030252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.030270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.030295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.030316 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.133581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.133700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.134429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.134812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.134898 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.238631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.238692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.238711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.238736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.238751 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.341611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.341694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.341709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.341731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.341747 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.347408 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:34:00.76112833 +0000 UTC Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.377791 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:31 crc kubenswrapper[4835]: E0216 15:08:31.378068 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.377815 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.378178 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:31 crc kubenswrapper[4835]: E0216 15:08:31.378893 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:31 crc kubenswrapper[4835]: E0216 15:08:31.379055 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.379395 4835 scope.go:117] "RemoveContainer" containerID="8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627" Feb 16 15:08:31 crc kubenswrapper[4835]: E0216 15:08:31.379771 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.400765 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.411795 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.428477 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.444999 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.445034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.445044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.445061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.445073 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.454282 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.482485 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.498094 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.513116 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.535168 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.546813 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.547965 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.548100 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.548188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.548315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.548476 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.562371 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc 
kubenswrapper[4835]: I0216 15:08:31.578347 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485
288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 
15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.594556 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.608198 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.624286 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.638584 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.651768 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.652201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.652254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.652273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.652300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.652322 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.667432 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.689248 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:31Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.755586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc 
kubenswrapper[4835]: I0216 15:08:31.755656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.755669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.755688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.755701 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.859469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.859582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.859601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.859631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.859651 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.963721 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.963790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.963812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.963840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:31 crc kubenswrapper[4835]: I0216 15:08:31.963861 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:31Z","lastTransitionTime":"2026-02-16T15:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.067436 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.067512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.067560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.067597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.067630 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.171168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.171227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.171238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.171256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.171268 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.274812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.274885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.274899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.274922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.274935 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.348210 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:19:05.219591026 +0000 UTC Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.377652 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:32 crc kubenswrapper[4835]: E0216 15:08:32.377821 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.378208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.378259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.378270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.378284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.378294 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.481625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.481699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.481711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.481728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.481744 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.585376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.585477 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.585501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.585585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.585608 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.689482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.689577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.689590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.689614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.689631 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.792940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.793021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.793038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.793066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.793088 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.896445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.896481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.896491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.896504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:32 crc kubenswrapper[4835]: I0216 15:08:32.896513 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:32Z","lastTransitionTime":"2026-02-16T15:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.000826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.001318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.001328 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.001346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.001372 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.103985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.104028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.104040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.104058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.104070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.207252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.207307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.207319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.207336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.207347 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.310369 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.310419 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.310432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.310448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.310460 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.349185 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:13:20.08880167 +0000 UTC Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.378762 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.378804 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:33 crc kubenswrapper[4835]: E0216 15:08:33.379201 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.379448 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:33 crc kubenswrapper[4835]: E0216 15:08:33.379449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:33 crc kubenswrapper[4835]: E0216 15:08:33.379799 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.412939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.412991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.413007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.413049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.413070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.516068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.516115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.516126 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.516143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.516155 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.619124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.619395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.619624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.619974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.620275 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.722636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.722991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.723152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.723265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.723361 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.825724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.826098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.826182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.826260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.826355 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.928566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.928827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.928930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.929100 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:33 crc kubenswrapper[4835]: I0216 15:08:33.929272 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:33Z","lastTransitionTime":"2026-02-16T15:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.032242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.032542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.032628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.032689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.032751 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.134966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.135251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.135318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.135381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.135439 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.238772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.239310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.239568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.239743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.239922 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.344298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.344370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.344390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.344417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.344435 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.350338 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:39:10.498811933 +0000 UTC Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.377803 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:34 crc kubenswrapper[4835]: E0216 15:08:34.378065 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.447574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.447628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.447648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.447673 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.447693 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.550661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.550730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.550748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.550778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.550799 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.654863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.654920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.654931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.654950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.654962 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.757151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.757204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.757220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.757245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.757271 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.859828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.860064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.860148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.860245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.860338 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.962820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.962866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.962877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.962892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:34 crc kubenswrapper[4835]: I0216 15:08:34.962904 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:34Z","lastTransitionTime":"2026-02-16T15:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.065687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.065730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.065740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.065756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.065767 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.168506 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.168561 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.168570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.168582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.168594 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.270815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.270872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.270889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.270912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.270928 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.351854 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:20:17.938778104 +0000 UTC Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.373459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.373577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.373587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.373605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.373619 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.378034 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:35 crc kubenswrapper[4835]: E0216 15:08:35.378217 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.378330 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:35 crc kubenswrapper[4835]: E0216 15:08:35.378412 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.378432 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:35 crc kubenswrapper[4835]: E0216 15:08:35.378509 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.476288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.476332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.476342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.476355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.476369 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.578411 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.578463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.578472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.578488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.578497 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.680788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.680834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.680847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.680862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.680874 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.784159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.784240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.784265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.784298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.784323 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.887730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.887797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.887815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.887845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.887862 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.992250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.992341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.992360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.992427 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:35 crc kubenswrapper[4835]: I0216 15:08:35.992450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:35Z","lastTransitionTime":"2026-02-16T15:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.096151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.096225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.096247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.096280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.096300 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.198700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.198759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.198776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.198799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.198817 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.301331 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.301365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.301396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.301412 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.301424 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.352505 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:42:19.677281901 +0000 UTC Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.378134 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:36 crc kubenswrapper[4835]: E0216 15:08:36.378283 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.403510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.403558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.403570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.403582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.403591 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.506601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.506648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.506662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.506679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.506689 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.608588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.608611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.608618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.608630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.608639 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.711345 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.711387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.711395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.711409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.711420 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.814499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.814567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.814579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.814596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.814607 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.917810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.917870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.917880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.917897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:36 crc kubenswrapper[4835]: I0216 15:08:36.917910 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:36Z","lastTransitionTime":"2026-02-16T15:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.020142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.020182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.020190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.020203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.020212 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.122851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.122932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.122942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.122957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.122969 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.226455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.226507 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.226522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.226561 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.226574 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.328884 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.328940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.328952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.328970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.328985 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.353300 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:40:44.850525124 +0000 UTC Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.378670 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.378670 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:37 crc kubenswrapper[4835]: E0216 15:08:37.378808 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.378692 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:37 crc kubenswrapper[4835]: E0216 15:08:37.378910 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:37 crc kubenswrapper[4835]: E0216 15:08:37.378958 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.431918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.431970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.431980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.431996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.432007 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.535577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.535696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.535709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.535725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.535734 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.638696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.638733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.638741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.638755 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.638764 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.741043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.741080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.741090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.741104 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.741113 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.842703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.842735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.842742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.842757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.842766 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.944800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.944837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.944847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.944862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:37 crc kubenswrapper[4835]: I0216 15:08:37.944872 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:37Z","lastTransitionTime":"2026-02-16T15:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.046873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.046919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.046927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.046945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.046955 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.144469 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.144687 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.144787 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs podName:5121c96d-796f-46b5-8889-b7e74c329b2f nodeName:}" failed. No retries permitted until 2026-02-16 15:09:10.144767868 +0000 UTC m=+99.436760763 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs") pod "network-metrics-daemon-b5nkt" (UID: "5121c96d-796f-46b5-8889-b7e74c329b2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.151056 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.151114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.151132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.151156 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.151173 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.235829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.235915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.235935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.235965 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.235990 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.258452 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:38Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.264642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.264757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.264781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.264816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.264839 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.283956 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:38Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.299745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.299799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.299814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.299842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.299860 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.316398 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:38Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.322000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.322058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.322070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.322087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.322098 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.337677 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:38Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.341547 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.341579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.341591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.341602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.341610 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.354417 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:44:13.659227068 +0000 UTC Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.358255 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",
\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:38Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.358403 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.360827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.360856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.360867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.360881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.360889 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.378328 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:38 crc kubenswrapper[4835]: E0216 15:08:38.378497 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.464077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.464140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.464152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.464172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.464185 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.566505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.566595 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.566614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.566638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.566656 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.668827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.668867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.668875 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.668893 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.668905 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.771240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.771279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.771289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.771306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.771318 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.872756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.872791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.872802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.872820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.872834 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.974744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.974776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.974787 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.974801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:38 crc kubenswrapper[4835]: I0216 15:08:38.974812 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:38Z","lastTransitionTime":"2026-02-16T15:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.076944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.076972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.076981 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.076994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.077003 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.179404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.179447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.179459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.179475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.179487 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.281683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.281719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.281729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.281743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.281753 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.355385 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:35:19.176829242 +0000 UTC Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.377839 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.377902 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:39 crc kubenswrapper[4835]: E0216 15:08:39.378015 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:39 crc kubenswrapper[4835]: E0216 15:08:39.378136 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.378191 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:39 crc kubenswrapper[4835]: E0216 15:08:39.378283 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.383612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.383666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.383684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.383866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.383884 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.486160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.486221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.486239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.486263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.486281 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.589120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.589296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.589316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.589343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.589362 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.692811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.692856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.692864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.692880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.692891 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.794912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.794958 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.794968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.794984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.794995 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.848092 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/0.log" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.848153 4835 generic.go:334] "Generic (PLEG): container finished" podID="36a4edb0-ce1a-4b59-b1f9-f5b43255de2d" containerID="de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce" exitCode=1 Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.848189 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gncxk" event={"ID":"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d","Type":"ContainerDied","Data":"de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.848611 4835 scope.go:117] "RemoveContainer" containerID="de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.869871 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.883008 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.896639 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.897034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.897132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.897228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.897305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.897370 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.907572 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.920830 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.953842 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90a
c78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.967262 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.984076 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.999510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.999562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.999573 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.999589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:39 crc kubenswrapper[4835]: I0216 15:08:39.999600 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:39Z","lastTransitionTime":"2026-02-16T15:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:39.999988 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:39Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.013743 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T
15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.028362 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.043035 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.056120 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.068846 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.080425 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.095454 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.101861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.101892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.101902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.101916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.101926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.115103 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.126115 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.204359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.204391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.204401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.204415 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.204424 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.307040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.307075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.307088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.307103 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.307114 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.355510 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:35:22.502074759 +0000 UTC Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.377989 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:40 crc kubenswrapper[4835]: E0216 15:08:40.378089 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.409716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.409825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.409903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.409964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.410030 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.561710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.561756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.561773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.561800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.561817 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.664106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.664157 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.664169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.664186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.664196 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.766869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.766931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.766945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.766969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.766981 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.852290 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/0.log" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.852346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gncxk" event={"ID":"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d","Type":"ContainerStarted","Data":"7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.864122 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.869505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.869559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.869568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.869582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.869592 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.875404 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.885412 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.894973 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.904264 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.913619 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.925646 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.945305 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.962153 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.971290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.971314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.971321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.971333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.971342 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:40Z","lastTransitionTime":"2026-02-16T15:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.973582 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.985437 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:40 crc kubenswrapper[4835]: I0216 15:08:40.995055 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:40Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.003099 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.011142 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc 
kubenswrapper[4835]: I0216 15:08:41.021788 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485
288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 
15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.031849 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.041583 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.050916 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.079840 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.079880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.079896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.079912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.079924 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.182647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.182684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.182693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.182708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.182717 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.285449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.285482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.285493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.285508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.285518 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.356579 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:36:39.332770591 +0000 UTC Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.378431 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:41 crc kubenswrapper[4835]: E0216 15:08:41.378662 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.378877 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:41 crc kubenswrapper[4835]: E0216 15:08:41.379025 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.379035 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:41 crc kubenswrapper[4835]: E0216 15:08:41.379129 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.387552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.387597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.387612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.387636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.387654 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.391426 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.410689 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.423328 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.439162 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.459361 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.473762 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.485963 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.490008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.490082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.490101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc 
kubenswrapper[4835]: I0216 15:08:41.490136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.490160 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.505458 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f
3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.518602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.530399 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.540635 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc 
kubenswrapper[4835]: I0216 15:08:41.570285 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.587732 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.592255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.592330 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.592352 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.592379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.592398 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.609194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.621848 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d4141
9ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.636063 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\"
,\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting 
for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.648502 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.663151 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:41Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.694995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.695311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.695399 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.695489 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.695601 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.797657 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.797942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.798010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.798086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.798152 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.900454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.900500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.900515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.900548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:41 crc kubenswrapper[4835]: I0216 15:08:41.900561 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:41Z","lastTransitionTime":"2026-02-16T15:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.003462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.003507 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.003516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.003550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.003563 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.106664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.106716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.106729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.106748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.106759 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.210564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.210620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.210640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.210673 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.210695 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.313797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.313858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.313868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.313889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.313898 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.357354 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:01:21.260260656 +0000 UTC Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.378326 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:42 crc kubenswrapper[4835]: E0216 15:08:42.378654 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.416770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.416838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.416849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.416871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.416886 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.519700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.519747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.519761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.519781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.519795 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.622338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.622405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.622423 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.622445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.622463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.725488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.725548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.725559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.725576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.725586 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.828200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.828267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.828284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.828310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.828331 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.931212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.931253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.931262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.931277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:42 crc kubenswrapper[4835]: I0216 15:08:42.931286 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:42Z","lastTransitionTime":"2026-02-16T15:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.033232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.033264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.033271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.033284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.033293 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.135560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.135606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.135616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.135631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.135643 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.238168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.238230 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.238249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.238272 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.238290 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.340172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.340202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.340211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.340225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.340235 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.358560 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:32:27.531623059 +0000 UTC Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.378705 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.378774 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.378801 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:43 crc kubenswrapper[4835]: E0216 15:08:43.378929 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:43 crc kubenswrapper[4835]: E0216 15:08:43.379028 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:43 crc kubenswrapper[4835]: E0216 15:08:43.379156 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.442663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.442703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.442711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.442727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.442739 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.545512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.545615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.545632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.545656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.545675 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.648473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.648553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.648572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.648595 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.648612 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.751113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.751150 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.751162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.751180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.751194 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.854008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.854063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.854078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.854098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.854110 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.956205 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.956249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.956263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.956283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:43 crc kubenswrapper[4835]: I0216 15:08:43.956295 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:43Z","lastTransitionTime":"2026-02-16T15:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.058364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.058396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.058405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.058437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.058446 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.160465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.160502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.160564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.160585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.160599 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.262670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.262707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.262718 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.262732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.262741 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.359569 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:47:22.20773351 +0000 UTC Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.365505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.365539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.365547 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.365560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.365569 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.377865 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:44 crc kubenswrapper[4835]: E0216 15:08:44.378271 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.378453 4835 scope.go:117] "RemoveContainer" containerID="8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.468116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.468161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.468178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.468204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.468225 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.571192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.571259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.571289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.571333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.571353 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.673016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.673048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.673056 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.673070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.673079 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.775833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.775881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.775892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.775911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.775923 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.864031 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/2.log" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.866208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.869728 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.877782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.877826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.877837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.877856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.877868 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.887306 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf60
2a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.897863 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.913885 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.930358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.939291 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc 
kubenswrapper[4835]: I0216 15:08:44.957991 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.968340 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.979728 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.979940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.979959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.979967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.979979 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.979989 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:44Z","lastTransitionTime":"2026-02-16T15:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.989507 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:44 crc kubenswrapper[4835]: I0216 15:08:44.996578 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:44Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.006426 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.014873 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.024614 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.036456 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc 
kubenswrapper[4835]: I0216 15:08:45.048273 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.063358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.074685 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.083198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.083233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.083242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.083290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.083303 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.088666 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.185775 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.185807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.185816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.185832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.185841 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.288985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.289059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.289072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.289094 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.289108 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.359974 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:34:19.756336488 +0000 UTC Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.378328 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.378379 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.378489 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:45 crc kubenswrapper[4835]: E0216 15:08:45.378546 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:45 crc kubenswrapper[4835]: E0216 15:08:45.378606 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:45 crc kubenswrapper[4835]: E0216 15:08:45.378751 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.391689 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.392305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.392341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.392354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.392367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.392378 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.494308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.494347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.494358 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.494372 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.494381 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.596556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.596591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.596601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.596616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.596625 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.700166 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.700205 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.700218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.700238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.700252 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.803260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.803291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.803299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.803313 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.803323 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.874734 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/3.log" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.876213 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/2.log" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.879257 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1" exitCode=1 Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.879317 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.879364 4835 scope.go:117] "RemoveContainer" containerID="8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.881685 4835 scope.go:117] "RemoveContainer" containerID="fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1" Feb 16 15:08:45 crc kubenswrapper[4835]: E0216 15:08:45.883153 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.898889 4835 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.905618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.905655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.905668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.905689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.905701 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:45Z","lastTransitionTime":"2026-02-16T15:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.911253 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac
-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.924850 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.944706 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f05bbd2256cef4f70f8de4ebc98ad1e90f72a289ce3215c24939a4555427627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:19Z\\\",\\\"message\\\":\\\"{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0216 15:08:19.463591 6556 services_controller.go:443] Built service openshift-kube-scheduler-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.233\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0216 15:08:19.463602 6556 services_controller.go:444] Built service openshift-kube-scheduler-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0216 15:08:19.463577 6556 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.o\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:45Z\\\",\\\"message\\\":\\\"c\\\\nI0216 15:08:45.320856 6957 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 995.288µs, libovsdb time 502.214µs\\\\nI0216 15:08:45.320864 6957 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed attempt(s)\\\\nI0216 15:08:45.320869 6957 default_network_controller.go:776] Recording success event on pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 15:08:45.320884 6957 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nF0216 15:08:45.320889 6957 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\
\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 
15:08:45.972670 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:45 crc kubenswrapper[4835]: I0216 15:08:45.986279 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:45Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.002620 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.010864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.010914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.010925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.010943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.010958 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.019365 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.029006 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.038105 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc 
kubenswrapper[4835]: I0216 15:08:46.048684 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485
288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 
15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.061281 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.072198 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.084014 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.093049 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b89273a-90a8-45ed-9ec1-6add78232d92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa003919dff9e8b7f05d24f459d9bccc04359e6db580f3bb4a311fefc6b515dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.105866 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.113814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.113883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.113900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.113928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.113946 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.122116 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.132951 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.145795 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.217662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.217721 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.217734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 
15:08:46.217754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.217767 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.322335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.322418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.322445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.322479 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.322507 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.361077 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:39:37.29416271 +0000 UTC Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.377884 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:46 crc kubenswrapper[4835]: E0216 15:08:46.378035 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.427185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.427266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.427289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.427324 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.427346 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.530364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.530412 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.530422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.530438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.530450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.632916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.633000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.633150 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.633183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.633201 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.735500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.735575 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.735591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.735610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.735626 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.838002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.838041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.838050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.838065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.838074 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.885138 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/3.log" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.888834 4835 scope.go:117] "RemoveContainer" containerID="fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1" Feb 16 15:08:46 crc kubenswrapper[4835]: E0216 15:08:46.889000 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.909135 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\"
,\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting 
for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.925086 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.940607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.940685 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.940703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.940728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.940748 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:46Z","lastTransitionTime":"2026-02-16T15:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.945496 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.965421 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:46 crc kubenswrapper[4835]: I0216 15:08:46.982100 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b89273a-90a8-45ed-9ec1-6add78232d92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa003919dff9e8b7f05d24f459d9bccc04359e6db580f3bb4a311fefc6b515dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:46Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.017488 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.042823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.042862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.042874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.042890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.042902 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.045685 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.056767 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.070065 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.081099 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.092284 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.106904 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.126362 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:45Z\\\",\\\"message\\\":\\\"c\\\\nI0216 15:08:45.320856 6957 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 995.288µs, libovsdb time 502.214µs\\\\nI0216 15:08:45.320864 6957 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed 
attempt(s)\\\\nI0216 15:08:45.320869 6957 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 15:08:45.320884 6957 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nF0216 15:08:45.320889 6957 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.145305 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.145758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.145790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.145801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.145815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.145825 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.158705 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.174796 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.187816 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.197360 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.209611 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:47Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:47 crc 
kubenswrapper[4835]: I0216 15:08:47.247374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.247421 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.247432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.247449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.247461 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.350557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.350606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.350614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.350629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.350637 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.361838 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:10:39.864590099 +0000 UTC Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.378517 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.378586 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.378564 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:47 crc kubenswrapper[4835]: E0216 15:08:47.378743 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:47 crc kubenswrapper[4835]: E0216 15:08:47.378893 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:47 crc kubenswrapper[4835]: E0216 15:08:47.378973 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.452982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.453042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.453059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.453084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.453102 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.555231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.555267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.555276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.555291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.555300 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.658267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.658313 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.658321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.658336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.658347 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.761269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.761347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.761370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.761400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.761422 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.864885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.864952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.864967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.864988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.865007 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.968075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.968493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.968765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.969027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:47 crc kubenswrapper[4835]: I0216 15:08:47.969260 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:47Z","lastTransitionTime":"2026-02-16T15:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.072587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.072617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.072627 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.072640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.072648 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.175168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.175236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.175245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.175261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.175272 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.278429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.278480 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.278496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.278514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.278554 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.362734 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:07:26.333436191 +0000 UTC Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.378042 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:48 crc kubenswrapper[4835]: E0216 15:08:48.378172 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.381145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.381174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.381185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.381200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.381212 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.433442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.433512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.433578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.433611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.433633 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: E0216 15:08:48.454674 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:48Z is after 2025-08-24T17:21:41Z"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.460256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.460300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.460314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.460335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.460349 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: E0216 15:08:48.475080 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:48Z is after 2025-08-24T17:21:41Z"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.479765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.479811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.479824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.479844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.479860 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: E0216 15:08:48.493985 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:48Z is after 2025-08-24T17:21:41Z"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.497920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.497951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.497961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.497977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.497987 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: E0216 15:08:48.516388 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:48Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.519915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.519944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.519954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.519968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.519980 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: E0216 15:08:48.537274 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:48Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:48 crc kubenswrapper[4835]: E0216 15:08:48.537416 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.539089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.539153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.539170 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.539196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.539216 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.642335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.642391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.642408 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.642441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.642476 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.745860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.745926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.745948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.745975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.745998 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.848941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.849016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.849039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.849067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.849089 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.952202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.952261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.952272 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.952305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:48 crc kubenswrapper[4835]: I0216 15:08:48.952321 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:48Z","lastTransitionTime":"2026-02-16T15:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.056739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.056810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.056861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.056892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.056918 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.160153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.160847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.160896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.160927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.160947 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.264290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.264363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.264450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.264482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.264566 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.363301 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:55:14.106749463 +0000 UTC Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.368158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.368217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.368237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.368261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.368278 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.377678 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.377722 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:49 crc kubenswrapper[4835]: E0216 15:08:49.377840 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.377865 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:49 crc kubenswrapper[4835]: E0216 15:08:49.378069 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:49 crc kubenswrapper[4835]: E0216 15:08:49.378294 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.471281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.471344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.471363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.471388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.471406 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.575083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.575147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.575165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.575189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.575208 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.677675 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.677720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.677737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.677754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.677766 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.780323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.780391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.780410 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.780435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.780456 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.883511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.883603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.883622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.883645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.883661 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.986601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.986693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.986793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.986824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:49 crc kubenswrapper[4835]: I0216 15:08:49.986844 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:49Z","lastTransitionTime":"2026-02-16T15:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.090060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.090107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.090129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.090152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.090165 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.192570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.192608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.192617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.192634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.192643 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.296526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.296574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.296585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.296598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.296608 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.363469 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:46:42.281880236 +0000 UTC Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.377903 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:50 crc kubenswrapper[4835]: E0216 15:08:50.378011 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.398518 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.398581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.398592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.398608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.398620 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.501170 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.501254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.501283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.501315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.501339 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.604856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.604885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.604896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.604912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.604923 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.707806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.707910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.707935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.707957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.707972 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.810437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.810468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.810476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.810487 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.810495 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.914160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.914238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.914258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.914290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:50 crc kubenswrapper[4835]: I0216 15:08:50.914310 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:50Z","lastTransitionTime":"2026-02-16T15:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.017633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.017684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.017714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.017730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.017741 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.120260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.120308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.120320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.120337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.120348 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.222182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.222278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.222290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.222307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.222319 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.325042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.325209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.325247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.325338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.325402 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.364809 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:56:54.171021468 +0000 UTC Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.378229 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.378300 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:51 crc kubenswrapper[4835]: E0216 15:08:51.378382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:51 crc kubenswrapper[4835]: E0216 15:08:51.378540 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.378557 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:51 crc kubenswrapper[4835]: E0216 15:08:51.378777 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.412868 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.425145 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.428440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.428480 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.428489 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.428504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.428514 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.438298 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.450282 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.460249 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.470612 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc 
kubenswrapper[4835]: I0216 15:08:51.483867 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485
288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 
15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.497744 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.510806 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.522703 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.531086 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.531122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.531131 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.531147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.531158 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.531314 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b89273a-90a8-45ed-9ec1-6add78232d92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa003919dff9e8b7f05d24f459d9bccc04359e6db580f3bb4a311fefc6b515dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.540872 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.551004 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.558899 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.571090 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.583039 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.591201 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.603570 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.625008 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:45Z\\\",\\\"message\\\":\\\"c\\\\nI0216 15:08:45.320856 6957 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 995.288µs, libovsdb time 502.214µs\\\\nI0216 15:08:45.320864 6957 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed 
attempt(s)\\\\nI0216 15:08:45.320869 6957 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 15:08:45.320884 6957 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nF0216 15:08:45.320889 6957 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:51Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.634018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.634070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.634085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.634107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.634123 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.736950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.737004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.737015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.737029 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.737039 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.839419 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.839461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.839473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.839490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.839503 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.941299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.941677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.941815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.941912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:51 crc kubenswrapper[4835]: I0216 15:08:51.942004 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:51Z","lastTransitionTime":"2026-02-16T15:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.044370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.044698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.044856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.045011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.045392 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.147384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.147847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.147943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.148050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.148267 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.250490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.250557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.250568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.250583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.250592 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.352838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.352864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.352873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.352886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.352894 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.365644 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:39:17.758828027 +0000 UTC Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.377937 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:52 crc kubenswrapper[4835]: E0216 15:08:52.378309 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.455118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.455163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.455172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.455186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.455197 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.557645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.557680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.557688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.557700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.557708 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.660282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.660344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.660363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.660382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.660396 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.762437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.762513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.762586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.762620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.762642 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.864913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.864954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.864973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.864989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.864999 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.968696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.968757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.968774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.968797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:52 crc kubenswrapper[4835]: I0216 15:08:52.968816 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:52Z","lastTransitionTime":"2026-02-16T15:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.072078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.072138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.072155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.072182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.072201 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.175702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.176114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.176305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.176465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.176659 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.280091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.280164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.280189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.280221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.280244 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.366566 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:03:26.786662406 +0000 UTC Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.378266 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.378385 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:53 crc kubenswrapper[4835]: E0216 15:08:53.378456 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.378266 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:53 crc kubenswrapper[4835]: E0216 15:08:53.378606 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:53 crc kubenswrapper[4835]: E0216 15:08:53.378694 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.384251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.384311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.384328 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.384349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.384367 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.488193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.488275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.488485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.488582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.488612 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.592023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.592076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.592094 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.592115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.592170 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.695988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.696069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.696092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.696121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.696139 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.799338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.799705 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.799716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.799732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.799745 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.902480 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.902592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.902619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.902648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:53 crc kubenswrapper[4835]: I0216 15:08:53.902664 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:53Z","lastTransitionTime":"2026-02-16T15:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.006250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.006317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.006345 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.006381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.006409 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.109309 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.109386 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.109409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.109439 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.109461 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.213135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.213208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.213228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.213258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.213280 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.316485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.316591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.316615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.316647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.316672 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.367286 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:43:25.984944868 +0000 UTC Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.378702 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:54 crc kubenswrapper[4835]: E0216 15:08:54.378827 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.419278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.419338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.419356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.419381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.419399 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.521719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.521765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.521776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.521794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.521805 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.625251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.625321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.625343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.625374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.625395 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.727918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.727977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.728001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.728027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.728046 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.830698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.830769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.830791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.830820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.830842 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.934113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.934176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.934193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.934239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:54 crc kubenswrapper[4835]: I0216 15:08:54.934259 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:54Z","lastTransitionTime":"2026-02-16T15:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.037004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.037071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.037093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.037121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.037146 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.139350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.139399 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.139415 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.139441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.139457 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.232771 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.232958 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:59.2329211 +0000 UTC m=+148.524914035 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.242522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.242601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.242619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.242642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.242664 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.334105 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.334200 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.334259 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.334313 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334383 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334453 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334505 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334594 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334622 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334505 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.334474536 +0000 UTC m=+148.626467501 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334798 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.334737753 +0000 UTC m=+148.626730688 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334825 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334840 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.334826046 +0000 UTC m=+148.626818981 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334856 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334878 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.334958 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.334934519 +0000 UTC m=+148.626927494 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.345719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.345760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.345775 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.345796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.345813 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.368484 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:48:41.366203806 +0000 UTC Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.377935 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.378136 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.378413 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.378482 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.378688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:55 crc kubenswrapper[4835]: E0216 15:08:55.378774 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.448794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.448866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.448891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.448923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.448944 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.552328 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.552388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.552406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.552432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.552450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.655589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.655669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.655710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.655741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.655762 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.758669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.758725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.758743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.758768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.758787 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.862096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.862154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.862171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.862196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.862214 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.965514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.965637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.965658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.965687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:55 crc kubenswrapper[4835]: I0216 15:08:55.965708 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:55Z","lastTransitionTime":"2026-02-16T15:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.068845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.068906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.068931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.068961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.068983 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.172060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.172089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.172098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.172111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.172120 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.274206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.274266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.274279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.274312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.274326 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.369469 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:26:01.883940805 +0000 UTC Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.376920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.377112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.377207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.377302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.377395 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.377645 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:56 crc kubenswrapper[4835]: E0216 15:08:56.377848 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.480274 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.480346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.480366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.480398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.480422 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.583291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.583319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.583327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.583338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.583346 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.686467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.686525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.686577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.686601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.686621 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.789229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.789628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.789780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.790179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.790430 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.893237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.893304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.893321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.893346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.893364 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.996301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.996353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.996372 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.996396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:56 crc kubenswrapper[4835]: I0216 15:08:56.996414 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:56Z","lastTransitionTime":"2026-02-16T15:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.100117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.100510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.100794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.101042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.101253 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.204825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.205149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.205281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.205406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.205585 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.309767 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.309836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.309857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.309883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.309901 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.370385 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:44:31.276661074 +0000 UTC Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.377797 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.377858 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.377890 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:57 crc kubenswrapper[4835]: E0216 15:08:57.377985 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:57 crc kubenswrapper[4835]: E0216 15:08:57.378138 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:57 crc kubenswrapper[4835]: E0216 15:08:57.378629 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.413018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.413077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.413093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.413116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.413135 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.516329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.516609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.516626 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.516651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.516668 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.619277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.619336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.619356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.619379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.619397 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.722897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.722962 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.722979 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.723003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.723025 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.825777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.825836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.825850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.825869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.825884 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.929274 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.929782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.929806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.929836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:57 crc kubenswrapper[4835]: I0216 15:08:57.929857 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:57Z","lastTransitionTime":"2026-02-16T15:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.033280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.033329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.033345 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.033402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.033418 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.136417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.136473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.136484 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.136505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.136519 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.239581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.239695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.239723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.239789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.239814 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.344596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.344678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.344702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.344734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.344757 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.371386 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:39:47.167704655 +0000 UTC Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.377715 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:08:58 crc kubenswrapper[4835]: E0216 15:08:58.378110 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.449429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.449504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.449523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.449585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.449617 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.547398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.547455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.547472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.547499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.547519 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: E0216 15:08:58.578631 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.585395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.585429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.585440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.585456 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.585466 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: E0216 15:08:58.603132 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.607543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.607571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.607580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.607597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.607611 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: E0216 15:08:58.621108 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.625646 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.625676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.625684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.625699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.625708 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: E0216 15:08:58.641966 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.648193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.648221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.648229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.648245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.648254 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: E0216 15:08:58.661368 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:08:58Z is after 2025-08-24T17:21:41Z" Feb 16 15:08:58 crc kubenswrapper[4835]: E0216 15:08:58.661472 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.662832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.662870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.662882 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.662899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.662911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.765592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.765642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.765653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.765671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.765684 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.868349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.868407 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.868424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.868446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.868463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.971377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.971452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.971475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.971505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:58 crc kubenswrapper[4835]: I0216 15:08:58.971645 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:58Z","lastTransitionTime":"2026-02-16T15:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.074241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.074307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.074329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.074357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.074376 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.177799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.177862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.177879 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.177902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.177919 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.281005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.281063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.281085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.281111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.281132 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.372188 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:57:54.121446211 +0000 UTC Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.378662 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.378738 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:08:59 crc kubenswrapper[4835]: E0216 15:08:59.378852 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.378890 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:08:59 crc kubenswrapper[4835]: E0216 15:08:59.379080 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:08:59 crc kubenswrapper[4835]: E0216 15:08:59.379350 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.385081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.385574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.385789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.386032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.386251 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.490395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.490495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.490512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.490582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.490601 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.594034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.594098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.594115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.594142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.594161 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.697783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.697855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.697877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.697907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.697929 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.801236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.801299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.801316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.801342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.801362 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.904728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.905058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.905201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.905334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:08:59 crc kubenswrapper[4835]: I0216 15:08:59.905452 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:08:59Z","lastTransitionTime":"2026-02-16T15:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.008021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.008335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.008443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.008564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.008681 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.111387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.111429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.111444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.111463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.111477 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.214414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.214462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.214478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.214500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.214517 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.317116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.317158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.317169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.317184 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.317196 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.373003 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:46:15.215625695 +0000 UTC Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.378416 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:00 crc kubenswrapper[4835]: E0216 15:09:00.378618 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.379789 4835 scope.go:117] "RemoveContainer" containerID="fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1" Feb 16 15:09:00 crc kubenswrapper[4835]: E0216 15:09:00.380110 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.419823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.419890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.419911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.419941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.419964 4835 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.523040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.523088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.523101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.523117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.523128 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.626276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.626363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.626387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.626432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.626455 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.728675 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.728740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.728762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.728837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.728868 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.833224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.833336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.833354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.833377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.833395 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.936778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.936828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.936840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.936856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:00 crc kubenswrapper[4835]: I0216 15:09:00.936869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:00Z","lastTransitionTime":"2026-02-16T15:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.040252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.040302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.040315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.040334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.040349 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.143751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.143805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.143817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.143831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.143842 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.245717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.245764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.245812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.245834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.245848 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.348279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.348367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.348386 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.348410 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.348428 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.373675 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:14:47.590533065 +0000 UTC Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.378309 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:01 crc kubenswrapper[4835]: E0216 15:09:01.378407 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.378489 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:01 crc kubenswrapper[4835]: E0216 15:09:01.378570 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.378663 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:01 crc kubenswrapper[4835]: E0216 15:09:01.378716 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.402114 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.419097 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.437873 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.451255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.451319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.451340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.451360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.451375 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.451353 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.466712 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.479898 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc 
kubenswrapper[4835]: I0216 15:09:01.505411 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485
288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 
15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.527051 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.540738 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.552250 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.553837 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.554007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.554102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.554193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.554277 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.563101 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b89273a-90a8-45ed-9ec1-6add78232d92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa003919dff9e8b7f05d24f459d9bccc04359e6db580f3bb4a311fefc6b515dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.574751 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.592579 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.603902 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.622318 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.635694 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.650905 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.656276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.656343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.656351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:01 crc 
kubenswrapper[4835]: I0216 15:09:01.656364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.656372 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.672916 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f
3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.702508 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:45Z\\\",\\\"message\\\":\\\"c\\\\nI0216 15:08:45.320856 6957 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 995.288µs, libovsdb time 502.214µs\\\\nI0216 15:08:45.320864 6957 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed 
attempt(s)\\\\nI0216 15:08:45.320869 6957 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 15:08:45.320884 6957 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nF0216 15:08:45.320889 6957 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:01Z is after 2025-08-24T17:21:41Z"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.759579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.759649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.759669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.759693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.759710 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.864932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.864981 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.864995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.865012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.865025 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.967249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.967324 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.967346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.967370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:01 crc kubenswrapper[4835]: I0216 15:09:01.967388 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:01Z","lastTransitionTime":"2026-02-16T15:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.070746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.070804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.070819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.070842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.070856 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.173023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.173057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.173068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.173083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.173094 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.274977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.275012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.275020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.275032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.275058 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.374818 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:54:40.246582712 +0000 UTC
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.377893 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.377959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.377978 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.377986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.377998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.378007 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: E0216 15:09:02.378085 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.480655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.480695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.480704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.480720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.480729 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.582794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.582843 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.582854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.582871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.582885 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.685430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.685467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.685480 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.685499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.685512 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.788176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.788256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.788281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.788309 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.788325 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.891370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.891842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.892092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.892287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.892459 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.996029 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.996093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.996115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.996146 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:02 crc kubenswrapper[4835]: I0216 15:09:02.996169 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:02Z","lastTransitionTime":"2026-02-16T15:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.099043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.099077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.099086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.099099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.099107 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.201476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.201562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.201579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.201606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.201623 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.304443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.304498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.304508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.304520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.304540 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.375638 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:43:25.921854957 +0000 UTC
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.378188 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.378189 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.378359 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt"
Feb 16 15:09:03 crc kubenswrapper[4835]: E0216 15:09:03.378584 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 15:09:03 crc kubenswrapper[4835]: E0216 15:09:03.378698 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 15:09:03 crc kubenswrapper[4835]: E0216 15:09:03.378809 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.407763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.407810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.407832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.407857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.407876 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.511018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.511071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.511080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.511104 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.511118 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.614632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.614729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.614754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.614791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.614817 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.718623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.718743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.718854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.718933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.718974 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.821226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.821288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.821317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.821362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.821382 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.924582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.924631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.924648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.924668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:03 crc kubenswrapper[4835]: I0216 15:09:03.924681 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:03Z","lastTransitionTime":"2026-02-16T15:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.027856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.027904 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.027922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.027942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.027955 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.130686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.130725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.130734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.130749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.130760 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.233388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.233421 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.233430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.233444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.233453 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.336025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.336063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.336076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.336093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.336109 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.376808 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:59:08.804768869 +0000 UTC Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.377919 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:04 crc kubenswrapper[4835]: E0216 15:09:04.378015 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.439207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.439251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.439264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.439279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.439291 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.542708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.542762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.542789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.542810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.542824 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.645737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.645805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.645825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.645853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.645869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.748560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.748626 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.748651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.748682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.748708 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.851000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.851057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.851066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.851079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.851088 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.953921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.953955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.953964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.953978 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:04 crc kubenswrapper[4835]: I0216 15:09:04.953987 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:04Z","lastTransitionTime":"2026-02-16T15:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.056496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.056575 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.056584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.056598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.056608 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.158964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.159010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.159020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.159039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.159052 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.260946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.261008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.261031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.261057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.261079 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.363726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.363796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.363809 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.363827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.363836 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.377348 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:22:31.851366342 +0000 UTC Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.377707 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:05 crc kubenswrapper[4835]: E0216 15:09:05.377804 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.377862 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.377903 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:05 crc kubenswrapper[4835]: E0216 15:09:05.378003 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:05 crc kubenswrapper[4835]: E0216 15:09:05.378140 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.466071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.466112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.466123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.466139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.466148 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.568623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.568659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.568668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.568681 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.568690 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.671483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.671761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.671769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.671783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.671797 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.774825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.774878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.774890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.774903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.774916 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.877850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.878432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.878647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.878804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.879010 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.982108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.982154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.982166 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.982183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:05 crc kubenswrapper[4835]: I0216 15:09:05.982195 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:05Z","lastTransitionTime":"2026-02-16T15:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.086463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.087325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.087404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.087477 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.087563 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.190007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.190046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.190056 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.190070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.190081 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.294472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.294512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.294521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.294560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.294571 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.377849 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:41:43.104842838 +0000 UTC Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.378043 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:06 crc kubenswrapper[4835]: E0216 15:09:06.378972 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.398404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.398458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.398477 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.398502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.398520 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.501070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.501163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.501187 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.501217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.501239 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.603551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.603579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.603587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.603600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.603609 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.705647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.705682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.705690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.705703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.705712 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.808930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.808966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.808974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.808988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.808997 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.911614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.911678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.911699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.911726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:06 crc kubenswrapper[4835]: I0216 15:09:06.911750 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:06Z","lastTransitionTime":"2026-02-16T15:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.013890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.013925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.013933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.013949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.013958 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.116693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.116762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.116784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.117074 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.117126 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.220807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.220849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.220857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.220871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.220885 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.323688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.323811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.323838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.323876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.323899 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.378874 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:07 crc kubenswrapper[4835]: E0216 15:09:07.379172 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.379326 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:07 crc kubenswrapper[4835]: E0216 15:09:07.379406 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.379450 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:47:17.971349572 +0000 UTC Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.379487 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:07 crc kubenswrapper[4835]: E0216 15:09:07.379681 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.426209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.426256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.426268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.426284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.426297 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.528482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.528589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.528607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.528630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.528649 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.631679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.631736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.631753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.631771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.631784 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.733811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.733875 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.733893 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.733917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.733935 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.835826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.835861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.835869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.835897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.835908 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.939639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.939708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.939718 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.939733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:07 crc kubenswrapper[4835]: I0216 15:09:07.939744 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:07Z","lastTransitionTime":"2026-02-16T15:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.043060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.043142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.043164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.043194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.043216 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.145854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.145932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.145951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.145980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.145997 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.249816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.249869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.249886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.249909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.249926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.352916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.352987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.353010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.353037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.353059 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.378863 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:08 crc kubenswrapper[4835]: E0216 15:09:08.379099 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.379718 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:30:28.021237594 +0000 UTC Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.455813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.455865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.455883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.455905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.455926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.559266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.559343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.559363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.559388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.559407 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.662470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.662525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.662567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.662590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.662610 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.766160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.766223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.766245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.766276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.766299 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.869578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.869640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.869660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.869682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.869698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.909494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.909573 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.909589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.909611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.909630 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: E0216 15:09:08.930479 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:08Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.936067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.936307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.936465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.936762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.937046 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: E0216 15:09:08.959807 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:08Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.965384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.965437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.965456 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.965481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.965503 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:08 crc kubenswrapper[4835]: E0216 15:09:08.987013 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:08Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.992363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.992404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.992414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.992431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:08 crc kubenswrapper[4835]: I0216 15:09:08.992444 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:08Z","lastTransitionTime":"2026-02-16T15:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: E0216 15:09:09.010524 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.015448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.015487 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.015496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.015511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.015542 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: E0216 15:09:09.036016 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T15:09:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T15:09:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee98f291-ae22-4f9b-b939-b249002beb8e\\\",\\\"systemUUID\\\":\\\"2a0d434c-dc43-4785-aa82-d9fa9aa7e7e9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:09Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:09 crc kubenswrapper[4835]: E0216 15:09:09.036764 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.039287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.039556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.039816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.039998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.040176 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.142543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.142940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.143128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.143392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.143656 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.247276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.247343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.247360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.247386 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.247406 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.350783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.350858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.350893 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.350922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.350948 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.378403 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.378606 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.378422 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:09 crc kubenswrapper[4835]: E0216 15:09:09.378877 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:09 crc kubenswrapper[4835]: E0216 15:09:09.379056 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:09 crc kubenswrapper[4835]: E0216 15:09:09.379248 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.380802 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:50:00.951038382 +0000 UTC Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.453855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.453932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.453958 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.454468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.454792 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.557483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.557897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.558182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.558453 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.558753 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.661843 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.662189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.662328 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.662463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.662669 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.765516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.765960 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.766249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.766849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.767040 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.869252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.869704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.869934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.870128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.870334 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.973834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.973869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.973879 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.973896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:09 crc kubenswrapper[4835]: I0216 15:09:09.973907 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:09Z","lastTransitionTime":"2026-02-16T15:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.076946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.076993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.077001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.077017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.077027 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.179634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.179694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.179710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.179733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.179751 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.237861 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:10 crc kubenswrapper[4835]: E0216 15:09:10.238093 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:09:10 crc kubenswrapper[4835]: E0216 15:09:10.238272 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs podName:5121c96d-796f-46b5-8889-b7e74c329b2f nodeName:}" failed. No retries permitted until 2026-02-16 15:10:14.238150612 +0000 UTC m=+163.530143517 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs") pod "network-metrics-daemon-b5nkt" (UID: "5121c96d-796f-46b5-8889-b7e74c329b2f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.282332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.282409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.282431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.282459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.282479 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.377868 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:10 crc kubenswrapper[4835]: E0216 15:09:10.377992 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.380954 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:29:13.516293949 +0000 UTC Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.385069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.385147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.385167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.385194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.385212 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.488242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.488298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.488314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.488336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.488353 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.592006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.592414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.592606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.592789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.593029 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.696055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.696134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.696152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.696181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.696201 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.800420 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.800789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.800861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.800892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.800917 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.903692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.904118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.904293 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.904470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:10 crc kubenswrapper[4835]: I0216 15:09:10.904660 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:10Z","lastTransitionTime":"2026-02-16T15:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.008365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.008446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.008467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.008496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.008519 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.111712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.112027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.112066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.112097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.112123 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.215476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.215630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.215651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.215681 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.215698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.318734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.318866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.318887 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.318911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.318928 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.378314 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.378369 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:11 crc kubenswrapper[4835]: E0216 15:09:11.378623 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.378658 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:11 crc kubenswrapper[4835]: E0216 15:09:11.378825 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:11 crc kubenswrapper[4835]: E0216 15:09:11.379126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.381732 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:31:50.321594294 +0000 UTC Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.400256 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.418012 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kklmz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aa94d4d-554e-4fab-9df4-426bbaa96ea8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba09b44b9805c524347b6d42249492117729b6e96d2e6d970b1a2edaeb23a95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j8r8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kklmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.423449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.423727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.423970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.424169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.424348 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.435211 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5121c96d-796f-46b5-8889-b7e74c329b2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmkhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-b5nkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc 
kubenswrapper[4835]: I0216 15:09:11.470196 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7fbf37-0200-49e0-8779-37a8e92653e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6a90d45dc8773cf199e11eaf7484114b8edd545f20f087d39b1ccd6ef627eb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7958dc98067fb7ce153d58a18ebcdf65bf7ec2cfab473cdf9ade5642d4ac09e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617eb1ee6923a8ffc9f6a5910d8ea26dd17018c34f42541651de5da4062231fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9374ca49ce56f8ece53d7e67480b90ac78d12d532dec1e260b633a411ada59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b38ec1e36c142ca15f8a4b474e3e4f876b8dd5da5cb18f472176b161a0050e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0048fecf11f0d50a1a62c0395c118fec66984896c3b8d28c7ec038aac115bc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29c2e224a812c61d16b0d95406a0f1d66899cfb2d4fd0ce6dd61bf274a852d4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9ebd56e785a067af0e43e139214ce873f6d5a9e7c7db6b358b2a95d26f025c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.494377 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.520255 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea04945145e64900c72e7a36042b8dc6e008ff178a2578474ddc94476d16f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.526990 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.527043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.527052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.527071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.527082 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.536896 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25ef07f-df59-41c2-8ad5-fe6bdc50345a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://345b9960e07d9869007aaecf9a71921d65f6e304634b8f12eeabe27669eeb653\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ebc03fe7523470d0f55718cdf92485d41419ce986ebc6d5bdfd10c28cc71c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89wn8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:08:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2bssf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.556843 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feb29cf6-38e6-43a9-a310-c19f6315f407\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T
15:07:51Z\\\",\\\"message\\\":\\\"equestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570074 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771254471\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771254471\\\\\\\\\\\\\\\" (2026-02-16 14:07:51 +0000 UTC to 2027-02-16 14:07:51 +0000 UTC (now=2026-02-16 15:07:51.570056707 +0000 UTC))\\\\\\\"\\\\nI0216 15:07:51.570073 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570091 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0216 15:07:51.570113 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0216 15:07:51.570129 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0216 15:07:51.570146 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-310179610/tls.crt::/tmp/serving-cert-310179610/tls.key\\\\\\\"\\\\nI0216 15:07:51.570148 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0216 15:07:51.570219 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0216 15:07:51.570247 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0216 15:07:51.570266 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 15:07:51.570255 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0216 15:07:51.570237 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0216 15:07:51.570838 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.577066 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d41a5dd-b6df-4d93-aa0f-6bd1f4020c7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b0770e9778b01b5ba9b1390c2231f29cea94047cb55e9b3d4632dd76923f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4dd1739a87f62bd979b3871ecdb42d281971aaefa28eaf940fcecd657081323\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7ac9bd391479a2a073d87e4bac2788116836489fd3e7b0bfafd4764087e952\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.598373 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ab4fb274b39e4db0c3c506031adef0f1247f6fcbe749e3d707a8e6100983f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.616434 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vhqvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9fe09143-7647-46a2-9631-18ef4f37f58e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e735f069dc92a98ee41b02fd8e067f261ff1f6689bf5034c5b8105e23640e80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dhbf5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vhqvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.630227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.630299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.630323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.630353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.630375 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.636136 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gncxk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:39Z\\\",\\\"message\\\":\\\"2026-02-16T15:07:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9\\\\n2026-02-16T15:07:54+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b7fe692-d9df-4949-86bd-d38773aa14c9 to /host/opt/cni/bin/\\\\n2026-02-16T15:07:54Z [verbose] multus-daemon started\\\\n2026-02-16T15:07:54Z [verbose] Readiness Indicator file check\\\\n2026-02-16T15:08:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bnbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gncxk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.652877 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b89273a-90a8-45ed-9ec1-6add78232d92\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa003919dff9e8b7f05d24f459d9bccc04359e6db580f3bb4a311fefc6b515dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72b01f2bacc518a9e185c169def3bc6764e1ec4512af82519ea94a2f41f5cc5a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.668148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.684603 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f8bec6970967ce2c55204f866f0f4733624aae07ab609833d0b1d336528ed5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76d25d79ecacb558bf9639e02cb287548e9064e1ec2f09cb0fde7c1e914ea31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.717583 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a790a22-cc2f-414e-b43b-fd6df80d19da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T15:08:45Z\\\",\\\"message\\\":\\\"c\\\\nI0216 15:08:45.320856 6957 pods.go:252] [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] addLogicalPort took 995.288µs, libovsdb time 502.214µs\\\\nI0216 15:08:45.320864 6957 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g after 0 failed 
attempt(s)\\\\nI0216 15:08:45.320869 6957 default_network_controller.go:776] Recording success event on pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0216 15:08:45.320884 6957 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\\\\\" but failed to find it\\\\nF0216 15:08:45.320889 6957 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T15:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5541568627b2c3a22
24a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vrwvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nwz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.734000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.734102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.734123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.734153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.734175 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.740069 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87cacd68-0bbf-43de-bae3-e9ed31d19fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e174cfe63ecdfdaaa7051f8af8164e00f8295e42caf803bfe07fe758999af296\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f206bec33670fb3e912d933cf602a51c92b99fba2802d3c1fe79b1cd920c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486eb5e8108cd7a9fb09f21e0bb25f8483521b95acbbb42bbb1b7078fc8c030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62ab69b75011b60e5129412fb46f7cabb2c3ac058ea41d4a6816f7dc9d4a54f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.756273 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d233f2c8-6963-48c1-889e-ef20f52ad5b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf25e725b5b31d42b47c4a4ce9d15f513fad83a549b398ab069eaf8db519eba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a
813b6029a0634982b0c4f82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nd4kl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.779875 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91e35405-0016-467d-9081-272eba8c8aa1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T15:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e47c04274d21c174ebc97a73fce0f6037d2a1ed2548c0e6f5c80755e533768a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T15:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc95b3560545cbd0c9b860a4dd9fdffdd441948edf0551a7d8b4a16b51ab90ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e93fc901cbd6812e92abcea0b0f48a20448aaddef1ecc787cc4a604e5f842176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:53Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc13f16e09103670fb5a91a013c02e8ee0ff8767ea3af9c6343f08142c6f52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756f3
0ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://756f30ab4190ca126e93553f604e894cee36aaebdec80f580da47dcba61f9f84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fdac3b2ffb872b5d2a6859f17f01d74adbf7f7d6eac55b0e4b4c2a7eaaa35ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f3b7472a657b6de2abd8f58a0e3fa07bfe5efbe97ebcd81fdd9d5c3e47de7cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T15:07:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T15:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-drnx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T15:07:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rq4qc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T15:09:11Z is after 2025-08-24T17:21:41Z" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.836261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.836349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.836368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.836392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.836410 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.939012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.939079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.939095 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.939117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:11 crc kubenswrapper[4835]: I0216 15:09:11.939134 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:11Z","lastTransitionTime":"2026-02-16T15:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.042659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.042716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.042733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.042755 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.042770 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.145556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.145584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.145592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.145605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.145615 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.249417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.249467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.249480 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.249496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.249508 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.352563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.352599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.352610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.352627 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.352639 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.378687 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:12 crc kubenswrapper[4835]: E0216 15:09:12.378931 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.382061 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:52:41.2709859 +0000 UTC Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.454806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.454853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.454864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.454882 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.454897 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.557010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.557063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.557072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.557089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.557100 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.659873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.659901 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.659909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.659921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.659930 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.762292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.762328 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.762341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.762356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.762368 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.864831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.864868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.864876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.864888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.864916 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.968134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.968189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.968206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.968227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:12 crc kubenswrapper[4835]: I0216 15:09:12.968243 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:12Z","lastTransitionTime":"2026-02-16T15:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.070475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.070582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.070608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.070635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.070656 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.172635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.172693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.172709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.172732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.172749 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.275353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.275425 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.275442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.275468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.275485 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.378064 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.378183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.378204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.378214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.378229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.378238 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: E0216 15:09:13.378251 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.378519 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.378593 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:13 crc kubenswrapper[4835]: E0216 15:09:13.378609 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:13 crc kubenswrapper[4835]: E0216 15:09:13.378773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.380050 4835 scope.go:117] "RemoveContainer" containerID="fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1" Feb 16 15:09:13 crc kubenswrapper[4835]: E0216 15:09:13.380323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nwz6_openshift-ovn-kubernetes(9a790a22-cc2f-414e-b43b-fd6df80d19da)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.382426 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:31:45.286758224 +0000 UTC Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.481143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.481190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.481204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.481223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.481238 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.584198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.584268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.584287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.584310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.584327 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.687170 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.687230 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.687246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.687270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.687298 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.789709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.789738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.789746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.789760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.789768 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.892702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.892744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.892752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.892766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.892775 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.995872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.995907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.995915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.995929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:13 crc kubenswrapper[4835]: I0216 15:09:13.995938 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:13Z","lastTransitionTime":"2026-02-16T15:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.098294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.098376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.098389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.098408 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.098419 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.201394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.201456 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.201476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.201500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.201518 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.308165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.308211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.308223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.308238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.308249 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.378643 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:14 crc kubenswrapper[4835]: E0216 15:09:14.379084 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.382820 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:28:39.647809775 +0000 UTC Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.410750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.410813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.410831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.410856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.410896 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.514638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.514712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.514790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.514822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.514839 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.618422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.618482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.618498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.618522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.618592 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.723577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.723642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.723665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.723694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.723718 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.826107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.826161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.826178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.826200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.826220 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.928871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.928913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.928926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.928943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:14 crc kubenswrapper[4835]: I0216 15:09:14.928959 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:14Z","lastTransitionTime":"2026-02-16T15:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.031889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.031947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.031964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.031988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.032006 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.134478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.134577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.134601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.134629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.134649 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.237200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.237268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.237305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.237338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.237363 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.339920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.339960 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.339973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.339988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.340000 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.378622 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:15 crc kubenswrapper[4835]: E0216 15:09:15.378784 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.378895 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.378622 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:15 crc kubenswrapper[4835]: E0216 15:09:15.379245 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:15 crc kubenswrapper[4835]: E0216 15:09:15.379328 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.383273 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:56:50.559322682 +0000 UTC Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.442680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.442727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.442747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.442764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.442775 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.545605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.545691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.545708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.545731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.545747 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.648417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.648472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.648490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.648513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.648562 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.750607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.750848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.750872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.750892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.750907 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.852916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.852960 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.852971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.852989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.853003 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.955568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.955626 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.955639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.955659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:15 crc kubenswrapper[4835]: I0216 15:09:15.955670 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:15Z","lastTransitionTime":"2026-02-16T15:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.057759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.057823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.057839 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.057862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.057878 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.160275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.160338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.160349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.160365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.160376 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.262590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.262629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.262640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.262658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.262669 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.365290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.365331 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.365340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.365355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.365366 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.377982 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:16 crc kubenswrapper[4835]: E0216 15:09:16.378131 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.384276 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:45:57.785254304 +0000 UTC Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.468301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.468347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.468360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.468376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.468387 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.570972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.571051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.571067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.571088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.571105 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.673780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.673822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.673837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.673850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.673859 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.775987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.776166 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.776175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.776187 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.776198 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.878473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.878521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.878555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.878608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.878621 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.980992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.981050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.981059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.981071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:16 crc kubenswrapper[4835]: I0216 15:09:16.981079 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:16Z","lastTransitionTime":"2026-02-16T15:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.083325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.083354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.083362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.083402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.083413 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.185961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.186002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.186020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.186040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.186051 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.288578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.288626 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.288637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.288671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.288684 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.378295 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.378358 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.378378 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:17 crc kubenswrapper[4835]: E0216 15:09:17.378463 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:17 crc kubenswrapper[4835]: E0216 15:09:17.378561 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:17 crc kubenswrapper[4835]: E0216 15:09:17.378621 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.384861 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:46:17.842680765 +0000 UTC Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.390584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.390619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.390630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.390646 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.390654 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.493065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.493096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.493106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.493118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.493128 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.596163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.596201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.596210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.596225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.596234 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.698851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.698937 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.698963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.698997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.699023 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.801119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.801161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.801171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.801190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.801202 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.903571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.903614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.903624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.903641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:17 crc kubenswrapper[4835]: I0216 15:09:17.903653 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:17Z","lastTransitionTime":"2026-02-16T15:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.006370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.006409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.006417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.006430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.006442 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.108175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.108220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.108231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.108249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.108259 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.211095 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.211153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.211170 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.211196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.211213 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.314302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.314357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.314370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.314388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.314401 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.378477 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:18 crc kubenswrapper[4835]: E0216 15:09:18.378738 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.385649 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:02:19.506864603 +0000 UTC Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.416700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.416742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.416754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.416770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.416782 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.519365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.519401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.519413 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.519428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.519439 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.622515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.622604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.622623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.622645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.622662 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.725450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.725511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.725559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.725584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.725607 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.829296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.829414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.829433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.829454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.829474 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.932972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.933020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.933039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.933065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:18 crc kubenswrapper[4835]: I0216 15:09:18.933080 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:18Z","lastTransitionTime":"2026-02-16T15:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.035887 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.035918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.035928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.035944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.035982 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:19Z","lastTransitionTime":"2026-02-16T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.138606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.138634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.138642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.138654 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.138679 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:19Z","lastTransitionTime":"2026-02-16T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.225319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.225397 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.225420 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.225448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.225470 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:19Z","lastTransitionTime":"2026-02-16T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.260855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.260914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.260936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.260964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.261005 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T15:09:19Z","lastTransitionTime":"2026-02-16T15:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.298388 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45"] Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.299142 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.300878 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.300967 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.301902 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.302028 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.341736 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/efa940d3-5765-4f78-9bfc-9816c4608263-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.341875 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efa940d3-5765-4f78-9bfc-9816c4608263-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.341934 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/efa940d3-5765-4f78-9bfc-9816c4608263-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.341968 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa940d3-5765-4f78-9bfc-9816c4608263-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.342053 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/efa940d3-5765-4f78-9bfc-9816c4608263-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.362177 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.362148422 podStartE2EDuration="1m28.362148422s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.359905721 +0000 UTC m=+108.651898676" watchObservedRunningTime="2026-02-16 15:09:19.362148422 +0000 UTC m=+108.654141357" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.377979 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.377976 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.378750 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:19 crc kubenswrapper[4835]: E0216 15:09:19.379065 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:19 crc kubenswrapper[4835]: E0216 15:09:19.379777 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:19 crc kubenswrapper[4835]: E0216 15:09:19.379937 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.385928 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:49:34.838080203 +0000 UTC Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.385990 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.399199 4835 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.443147 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efa940d3-5765-4f78-9bfc-9816c4608263-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.443219 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efa940d3-5765-4f78-9bfc-9816c4608263-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.443253 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa940d3-5765-4f78-9bfc-9816c4608263-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc 
kubenswrapper[4835]: I0216 15:09:19.443285 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/efa940d3-5765-4f78-9bfc-9816c4608263-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.443339 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/efa940d3-5765-4f78-9bfc-9816c4608263-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.443440 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/efa940d3-5765-4f78-9bfc-9816c4608263-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.443504 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/efa940d3-5765-4f78-9bfc-9816c4608263-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.444974 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efa940d3-5765-4f78-9bfc-9816c4608263-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.445090 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.445073189 podStartE2EDuration="1m27.445073189s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.43959039 +0000 UTC m=+108.731583295" watchObservedRunningTime="2026-02-16 15:09:19.445073189 +0000 UTC m=+108.737066084" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.445227 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kklmz" podStartSLOduration=88.445222913 podStartE2EDuration="1m28.445222913s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.422864612 +0000 UTC m=+108.714857547" watchObservedRunningTime="2026-02-16 15:09:19.445222913 +0000 UTC m=+108.737215808" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.450069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efa940d3-5765-4f78-9bfc-9816c4608263-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.462241 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efa940d3-5765-4f78-9bfc-9816c4608263-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-gfc45\" (UID: \"efa940d3-5765-4f78-9bfc-9816c4608263\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.476831 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.476815047 podStartE2EDuration="1m28.476815047s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.464677435 +0000 UTC m=+108.756670340" watchObservedRunningTime="2026-02-16 15:09:19.476815047 +0000 UTC m=+108.768807942" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.494343 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2bssf" podStartSLOduration=87.494318165 podStartE2EDuration="1m27.494318165s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.494175002 +0000 UTC m=+108.786167907" watchObservedRunningTime="2026-02-16 15:09:19.494318165 +0000 UTC m=+108.786311070" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.505035 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=34.505019528 podStartE2EDuration="34.505019528s" podCreationTimestamp="2026-02-16 15:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.504637618 +0000 UTC m=+108.796630523" watchObservedRunningTime="2026-02-16 15:09:19.505019528 +0000 UTC m=+108.797012443" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 
15:09:19.565173 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vhqvm" podStartSLOduration=88.565139641 podStartE2EDuration="1m28.565139641s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.543597523 +0000 UTC m=+108.835590428" watchObservedRunningTime="2026-02-16 15:09:19.565139641 +0000 UTC m=+108.857132576" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.579202 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gncxk" podStartSLOduration=88.579184625 podStartE2EDuration="1m28.579184625s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.56656932 +0000 UTC m=+108.858562225" watchObservedRunningTime="2026-02-16 15:09:19.579184625 +0000 UTC m=+108.871177530" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.597190 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podStartSLOduration=88.597166067 podStartE2EDuration="1m28.597166067s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.597057764 +0000 UTC m=+108.889050699" watchObservedRunningTime="2026-02-16 15:09:19.597166067 +0000 UTC m=+108.889158972" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.598070 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.598063801 podStartE2EDuration="59.598063801s" podCreationTimestamp="2026-02-16 15:08:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.580060619 +0000 UTC m=+108.872053514" watchObservedRunningTime="2026-02-16 15:09:19.598063801 +0000 UTC m=+108.890056696" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.615505 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" Feb 16 15:09:19 crc kubenswrapper[4835]: I0216 15:09:19.616248 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rq4qc" podStartSLOduration=88.616229097 podStartE2EDuration="1m28.616229097s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:19.615647671 +0000 UTC m=+108.907640576" watchObservedRunningTime="2026-02-16 15:09:19.616229097 +0000 UTC m=+108.908222002" Feb 16 15:09:20 crc kubenswrapper[4835]: I0216 15:09:20.306504 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" event={"ID":"efa940d3-5765-4f78-9bfc-9816c4608263","Type":"ContainerStarted","Data":"15975d2a75d87e889779c594b72f27ba4f506c4c98f1709c2b10731ce7687838"} Feb 16 15:09:20 crc kubenswrapper[4835]: I0216 15:09:20.306563 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" event={"ID":"efa940d3-5765-4f78-9bfc-9816c4608263","Type":"ContainerStarted","Data":"50cc619f6056b57efc4c4161c905c7ecbf9d391d4e61c7748d996f7da49249f6"} Feb 16 15:09:20 crc kubenswrapper[4835]: I0216 15:09:20.378301 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:20 crc kubenswrapper[4835]: E0216 15:09:20.378914 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:21 crc kubenswrapper[4835]: I0216 15:09:21.378366 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:21 crc kubenswrapper[4835]: I0216 15:09:21.378479 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:21 crc kubenswrapper[4835]: I0216 15:09:21.378366 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:21 crc kubenswrapper[4835]: E0216 15:09:21.379577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:21 crc kubenswrapper[4835]: E0216 15:09:21.379671 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:21 crc kubenswrapper[4835]: E0216 15:09:21.379733 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:22 crc kubenswrapper[4835]: I0216 15:09:22.378631 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:22 crc kubenswrapper[4835]: E0216 15:09:22.378917 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:23 crc kubenswrapper[4835]: I0216 15:09:23.382226 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:23 crc kubenswrapper[4835]: I0216 15:09:23.382305 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:23 crc kubenswrapper[4835]: I0216 15:09:23.382378 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:23 crc kubenswrapper[4835]: E0216 15:09:23.382427 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:23 crc kubenswrapper[4835]: E0216 15:09:23.382674 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:23 crc kubenswrapper[4835]: E0216 15:09:23.382855 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:24 crc kubenswrapper[4835]: I0216 15:09:24.378083 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:24 crc kubenswrapper[4835]: E0216 15:09:24.378330 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:25 crc kubenswrapper[4835]: I0216 15:09:25.378456 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:25 crc kubenswrapper[4835]: E0216 15:09:25.378598 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:25 crc kubenswrapper[4835]: I0216 15:09:25.378787 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:25 crc kubenswrapper[4835]: E0216 15:09:25.378849 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:25 crc kubenswrapper[4835]: I0216 15:09:25.378956 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:25 crc kubenswrapper[4835]: E0216 15:09:25.379024 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:26 crc kubenswrapper[4835]: I0216 15:09:26.331704 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/1.log" Feb 16 15:09:26 crc kubenswrapper[4835]: I0216 15:09:26.334090 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/0.log" Feb 16 15:09:26 crc kubenswrapper[4835]: I0216 15:09:26.334184 4835 generic.go:334] "Generic (PLEG): container finished" podID="36a4edb0-ce1a-4b59-b1f9-f5b43255de2d" containerID="7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04" exitCode=1 Feb 16 15:09:26 crc kubenswrapper[4835]: I0216 15:09:26.334235 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gncxk" event={"ID":"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d","Type":"ContainerDied","Data":"7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04"} Feb 16 15:09:26 crc kubenswrapper[4835]: I0216 15:09:26.334286 4835 scope.go:117] "RemoveContainer" containerID="de5db76cef0fea2cb0fff8d4f4f55106b38e8dcaf52604740d953c00045d58ce" Feb 16 15:09:26 crc kubenswrapper[4835]: I0216 15:09:26.334973 
4835 scope.go:117] "RemoveContainer" containerID="7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04" Feb 16 15:09:26 crc kubenswrapper[4835]: E0216 15:09:26.335247 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gncxk_openshift-multus(36a4edb0-ce1a-4b59-b1f9-f5b43255de2d)\"" pod="openshift-multus/multus-gncxk" podUID="36a4edb0-ce1a-4b59-b1f9-f5b43255de2d" Feb 16 15:09:26 crc kubenswrapper[4835]: I0216 15:09:26.368237 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gfc45" podStartSLOduration=95.368212372 podStartE2EDuration="1m35.368212372s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:20.334789129 +0000 UTC m=+109.626782094" watchObservedRunningTime="2026-02-16 15:09:26.368212372 +0000 UTC m=+115.660205297" Feb 16 15:09:26 crc kubenswrapper[4835]: I0216 15:09:26.378128 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:26 crc kubenswrapper[4835]: E0216 15:09:26.379471 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:27 crc kubenswrapper[4835]: I0216 15:09:27.342404 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/1.log" Feb 16 15:09:27 crc kubenswrapper[4835]: I0216 15:09:27.378827 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:27 crc kubenswrapper[4835]: I0216 15:09:27.378876 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:27 crc kubenswrapper[4835]: E0216 15:09:27.379044 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:27 crc kubenswrapper[4835]: I0216 15:09:27.379308 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:27 crc kubenswrapper[4835]: E0216 15:09:27.379406 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:27 crc kubenswrapper[4835]: E0216 15:09:27.379707 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:28 crc kubenswrapper[4835]: I0216 15:09:28.378578 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:28 crc kubenswrapper[4835]: E0216 15:09:28.379066 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:28 crc kubenswrapper[4835]: I0216 15:09:28.379248 4835 scope.go:117] "RemoveContainer" containerID="fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1" Feb 16 15:09:29 crc kubenswrapper[4835]: I0216 15:09:29.351269 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/3.log" Feb 16 15:09:29 crc kubenswrapper[4835]: I0216 15:09:29.353446 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-b5nkt"] Feb 16 15:09:29 crc kubenswrapper[4835]: I0216 15:09:29.353546 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:29 crc kubenswrapper[4835]: E0216 15:09:29.353617 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:29 crc kubenswrapper[4835]: I0216 15:09:29.358359 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerStarted","Data":"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a"} Feb 16 15:09:29 crc kubenswrapper[4835]: I0216 15:09:29.359016 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:09:29 crc kubenswrapper[4835]: I0216 15:09:29.378147 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:29 crc kubenswrapper[4835]: I0216 15:09:29.378183 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:29 crc kubenswrapper[4835]: E0216 15:09:29.378273 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:29 crc kubenswrapper[4835]: E0216 15:09:29.378387 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:29 crc kubenswrapper[4835]: I0216 15:09:29.406575 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podStartSLOduration=97.406560075 podStartE2EDuration="1m37.406560075s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:29.405762523 +0000 UTC m=+118.697755418" watchObservedRunningTime="2026-02-16 15:09:29.406560075 +0000 UTC m=+118.698552970" Feb 16 15:09:30 crc kubenswrapper[4835]: I0216 15:09:30.378082 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:30 crc kubenswrapper[4835]: E0216 15:09:30.378184 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:31 crc kubenswrapper[4835]: E0216 15:09:31.326481 4835 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 15:09:31 crc kubenswrapper[4835]: I0216 15:09:31.378242 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:31 crc kubenswrapper[4835]: I0216 15:09:31.378277 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:31 crc kubenswrapper[4835]: I0216 15:09:31.380473 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:31 crc kubenswrapper[4835]: E0216 15:09:31.380684 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:31 crc kubenswrapper[4835]: E0216 15:09:31.380740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:31 crc kubenswrapper[4835]: E0216 15:09:31.380851 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:31 crc kubenswrapper[4835]: E0216 15:09:31.486176 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:09:32 crc kubenswrapper[4835]: I0216 15:09:32.377721 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:32 crc kubenswrapper[4835]: E0216 15:09:32.377827 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:33 crc kubenswrapper[4835]: I0216 15:09:33.377870 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:33 crc kubenswrapper[4835]: I0216 15:09:33.377931 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:33 crc kubenswrapper[4835]: I0216 15:09:33.377885 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:33 crc kubenswrapper[4835]: E0216 15:09:33.378079 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:33 crc kubenswrapper[4835]: E0216 15:09:33.378275 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:33 crc kubenswrapper[4835]: E0216 15:09:33.378433 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:34 crc kubenswrapper[4835]: I0216 15:09:34.377802 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:34 crc kubenswrapper[4835]: E0216 15:09:34.378100 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:35 crc kubenswrapper[4835]: I0216 15:09:35.377775 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:35 crc kubenswrapper[4835]: I0216 15:09:35.377844 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:35 crc kubenswrapper[4835]: E0216 15:09:35.378316 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:35 crc kubenswrapper[4835]: I0216 15:09:35.377915 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:35 crc kubenswrapper[4835]: E0216 15:09:35.378461 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:35 crc kubenswrapper[4835]: E0216 15:09:35.378675 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:36 crc kubenswrapper[4835]: I0216 15:09:36.378576 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:36 crc kubenswrapper[4835]: E0216 15:09:36.378793 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:36 crc kubenswrapper[4835]: E0216 15:09:36.488142 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:09:37 crc kubenswrapper[4835]: I0216 15:09:37.377673 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:37 crc kubenswrapper[4835]: I0216 15:09:37.377779 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:37 crc kubenswrapper[4835]: E0216 15:09:37.377855 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:37 crc kubenswrapper[4835]: E0216 15:09:37.377991 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:37 crc kubenswrapper[4835]: I0216 15:09:37.378100 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:37 crc kubenswrapper[4835]: E0216 15:09:37.378195 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:38 crc kubenswrapper[4835]: I0216 15:09:38.378174 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:38 crc kubenswrapper[4835]: E0216 15:09:38.378339 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:39 crc kubenswrapper[4835]: I0216 15:09:39.378658 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:39 crc kubenswrapper[4835]: I0216 15:09:39.378713 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:39 crc kubenswrapper[4835]: I0216 15:09:39.378802 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:39 crc kubenswrapper[4835]: E0216 15:09:39.378882 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:39 crc kubenswrapper[4835]: E0216 15:09:39.378993 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:39 crc kubenswrapper[4835]: I0216 15:09:39.379615 4835 scope.go:117] "RemoveContainer" containerID="7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04" Feb 16 15:09:39 crc kubenswrapper[4835]: E0216 15:09:39.379384 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:40 crc kubenswrapper[4835]: I0216 15:09:40.378669 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:40 crc kubenswrapper[4835]: E0216 15:09:40.379251 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 15:09:40 crc kubenswrapper[4835]: I0216 15:09:40.408746 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/1.log" Feb 16 15:09:40 crc kubenswrapper[4835]: I0216 15:09:40.408839 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gncxk" event={"ID":"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d","Type":"ContainerStarted","Data":"98f8e6d7b44084a40632591b1774ef5147c6f4e787ac6fb60321e2810fa9ec35"} Feb 16 15:09:41 crc kubenswrapper[4835]: I0216 15:09:41.378512 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:41 crc kubenswrapper[4835]: I0216 15:09:41.378517 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:41 crc kubenswrapper[4835]: E0216 15:09:41.378640 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 15:09:41 crc kubenswrapper[4835]: I0216 15:09:41.378719 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:41 crc kubenswrapper[4835]: E0216 15:09:41.379143 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 15:09:41 crc kubenswrapper[4835]: E0216 15:09:41.379494 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-b5nkt" podUID="5121c96d-796f-46b5-8889-b7e74c329b2f" Feb 16 15:09:42 crc kubenswrapper[4835]: I0216 15:09:42.377955 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:42 crc kubenswrapper[4835]: I0216 15:09:42.381629 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 15:09:42 crc kubenswrapper[4835]: I0216 15:09:42.382170 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 15:09:43 crc kubenswrapper[4835]: I0216 15:09:43.378782 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:09:43 crc kubenswrapper[4835]: I0216 15:09:43.378855 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:43 crc kubenswrapper[4835]: I0216 15:09:43.378810 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:43 crc kubenswrapper[4835]: I0216 15:09:43.383067 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 15:09:43 crc kubenswrapper[4835]: I0216 15:09:43.383121 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 15:09:43 crc kubenswrapper[4835]: I0216 15:09:43.383182 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 15:09:43 crc kubenswrapper[4835]: I0216 15:09:43.383199 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.428813 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.620238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.668972 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ztcpg"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.669635 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.670523 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swn24"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.671075 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.672301 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rcbfk"] Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.672397 4835 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.672447 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.672919 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.674023 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f6rdr"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.674657 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.675403 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.675834 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.676195 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.676225 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-94mgp"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.676720 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.676800 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.677189 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.677827 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.678081 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.678088 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.678463 4835 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.678496 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.678906 4835 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.678937 4835 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": 
no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.678960 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.678972 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.679232 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.679373 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.680102 4835 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.680162 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot 
list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.680245 4835 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.680266 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.680606 4835 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.680645 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.680715 
4835 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.680737 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.680811 4835 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.680833 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.684623 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.685499 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.685704 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.685991 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rqspp"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.686289 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.686636 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.687209 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-28xh9"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.688290 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.688392 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.689037 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.704950 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.707935 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.708670 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.711877 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.712111 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.712249 4835 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.712293 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.722079 4835 reflector.go:561] 
object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.722379 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.722801 4835 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.722865 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.723787 4835 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no 
relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.723915 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.724089 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.724237 4835 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.724322 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.724576 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.724843 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.727205 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.729629 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ddnzw"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.730097 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.730358 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rflfs"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.730664 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.730688 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ddnzw" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.731200 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.731435 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.731732 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.732117 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.732516 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.732637 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.732754 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.732840 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.732913 4835 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.732934 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.732942 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.732965 4835 reflector.go:561] 
object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.732982 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.733013 4835 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.733023 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733016 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733040 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.733101 4835 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733107 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.733112 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733169 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.733242 4835 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.733255 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list 
resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733296 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733372 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733479 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.733561 4835 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.733575 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.733606 4835 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc 
kubenswrapper[4835]: E0216 15:09:49.733617 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.733647 4835 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.733657 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.733689 4835 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.733700 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" 
in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.733731 4835 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: secrets "machine-approver-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.733740 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733786 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.733860 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.734616 4835 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.734657 4835 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.734689 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.734825 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.734828 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.734933 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735029 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735083 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735160 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735195 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735195 4835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735241 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735274 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735346 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735354 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735392 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: W0216 15:09:49.735404 4835 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Feb 16 15:09:49 crc kubenswrapper[4835]: E0216 15:09:49.735416 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this 
object" logger="UnhandledError" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735169 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.735764 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736007 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736112 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736153 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736299 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736319 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736378 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736394 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736476 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 
15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.736673 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.737001 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.737098 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.737172 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.737545 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.737749 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.737849 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.738047 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.738298 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.738434 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.738446 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.738462 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.740161 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.740651 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.746907 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.747118 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.747298 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.747168 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swn24"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.748388 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-82spz"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.749062 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.750645 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.751133 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.751153 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z7cdv"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.751566 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.751860 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.752128 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.752167 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.753186 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.761472 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.777131 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.777575 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.777776 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.778508 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.779391 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.780544 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxmc7"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.781440 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.783595 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj9qp\" (UniqueName: \"kubernetes.io/projected/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-kube-api-access-cj9qp\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.783630 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.783656 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.783679 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.783697 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-trusted-ca-bundle\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.783714 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-config\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.783745 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-encryption-config\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.784003 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: 
I0216 15:09:49.784025 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/064c5316-78a7-4320-a324-8ebe400e9db9-auth-proxy-config\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.784045 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-client-ca\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.784065 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.790902 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.791168 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.791602 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.793964 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.795274 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.796467 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.807545 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.807588 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7841ce5-1f76-4df0-929d-c6573d4ff806-serving-cert\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.807989 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-serving-cert\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " 
pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808173 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808293 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xppq\" (UniqueName: \"kubernetes.io/projected/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-kube-api-access-9xppq\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808407 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808490 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6j2w\" (UniqueName: \"kubernetes.io/projected/94a7fb98-0826-4559-9113-aad4415a7f21-kube-api-access-x6j2w\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808606 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-client\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808678 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-serving-cert\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808747 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94c481c5-c791-453a-a03e-2eb7a130c132-audit-dir\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808860 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/94c481c5-c791-453a-a03e-2eb7a130c132-node-pullsecrets\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.808939 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94a7fb98-0826-4559-9113-aad4415a7f21-audit-dir\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809021 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809108 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-console-config\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809205 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809287 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq7s8\" (UniqueName: \"kubernetes.io/projected/d7841ce5-1f76-4df0-929d-c6573d4ff806-kube-api-access-xq7s8\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809382 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-audit\") pod \"apiserver-76f77b778f-rcbfk\" (UID: 
\"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809458 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809545 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-serving-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809622 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-policies\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809688 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809876 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.809985 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810065 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810141 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810211 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810332 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810419 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810511 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-config\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810629 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7qb\" (UniqueName: \"kubernetes.io/projected/064c5316-78a7-4320-a324-8ebe400e9db9-kube-api-access-tx7qb\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810708 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-service-ca-bundle\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810816 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-9ckpx\" (UniqueName: \"kubernetes.io/projected/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-kube-api-access-9ckpx\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.810885 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-oauth-config\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.816563 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.816879 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-etcd-client\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.816972 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrr5\" (UniqueName: \"kubernetes.io/projected/94c481c5-c791-453a-a03e-2eb7a130c132-kube-api-access-frrr5\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817060 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-images\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817134 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/601d76ac-3e65-4ef1-9291-cd0e647ab37a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817204 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817278 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817347 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gjdkn\" (UniqueName: \"kubernetes.io/projected/e964e5a5-fe00-4d83-8416-2e2bd64c359d-kube-api-access-gjdkn\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817663 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-serving-cert\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817757 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817837 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/064c5316-78a7-4320-a324-8ebe400e9db9-machine-approver-tls\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817903 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.817980 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4hfq\" (UniqueName: \"kubernetes.io/projected/601d76ac-3e65-4ef1-9291-cd0e647ab37a-kube-api-access-z4hfq\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818045 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e964e5a5-fe00-4d83-8416-2e2bd64c359d-serving-cert\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818109 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45a57165-bc73-49db-abee-0af2b2f280e6-config\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818178 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnq4\" (UniqueName: \"kubernetes.io/projected/45a57165-bc73-49db-abee-0af2b2f280e6-kube-api-access-fnnq4\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818246 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-audit-policies\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818316 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-dir\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818382 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818450 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818520 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5vh\" (UniqueName: \"kubernetes.io/projected/6c28e183-5341-482b-9104-4ca0b17d4f3c-kube-api-access-qg5vh\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818603 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45a57165-bc73-49db-abee-0af2b2f280e6-serving-cert\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818675 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064c5316-78a7-4320-a324-8ebe400e9db9-config\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-service-ca\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818812 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818882 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/45a57165-bc73-49db-abee-0af2b2f280e6-trusted-ca\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.818943 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-oauth-serving-cert\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.819013 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.819088 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbxtk\" (UniqueName: \"kubernetes.io/projected/3d329678-3edc-4b70-9796-85c6ada120de-kube-api-access-tbxtk\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.813265 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.812562 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 15:09:49 crc kubenswrapper[4835]: 
I0216 15:09:49.820195 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.820756 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.820319 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.822697 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7tzqp"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.822824 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.824901 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.825037 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.825570 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.830515 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.831159 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.833464 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.834091 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.835297 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.835978 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.838026 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.840604 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zbzmb"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.841216 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.842117 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-45wvv"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.842489 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.853079 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.855133 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqskc"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.855626 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.862586 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.863127 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.865305 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.866022 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.867724 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.868350 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.869735 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2d8lr"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.870268 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2d8lr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.871053 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f6rdr"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.871235 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.872064 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-94mgp"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.873768 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.874278 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-28xh9"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.875585 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.877651 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.879004 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rcbfk"] Feb 16 15:09:49 crc 
kubenswrapper[4835]: I0216 15:09:49.880111 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.881177 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.883717 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ztcpg"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.885655 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z7cdv"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.886746 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-82spz"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.891057 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.891882 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.895103 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.895982 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rqspp"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.897569 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.899187 4835 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.900463 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.904647 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rflfs"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.904704 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.904755 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-45wvv"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.906751 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.907807 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.908816 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.909824 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.911079 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.911149 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.911967 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.914079 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.915092 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ddnzw"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.916073 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.917017 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zbzmb"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.918368 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxmc7"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919393 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2d8lr"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919605 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-serving-cert\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919635 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/94c481c5-c791-453a-a03e-2eb7a130c132-audit-dir\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/94c481c5-c791-453a-a03e-2eb7a130c132-node-pullsecrets\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919684 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-client\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919717 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919740 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-console-config\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919748 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/94c481c5-c791-453a-a03e-2eb7a130c132-audit-dir\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919761 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94a7fb98-0826-4559-9113-aad4415a7f21-audit-dir\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919801 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq7s8\" (UniqueName: \"kubernetes.io/projected/d7841ce5-1f76-4df0-929d-c6573d4ff806-kube-api-access-xq7s8\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd0807fd-8df9-432e-9189-fdbf743995f3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919835 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/94c481c5-c791-453a-a03e-2eb7a130c132-node-pullsecrets\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919803 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94a7fb98-0826-4559-9113-aad4415a7f21-audit-dir\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919847 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-audit\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919917 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919938 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919964 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-policies\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.919988 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxvq\" (UniqueName: \"kubernetes.io/projected/b6f870fb-6a4d-4d9c-9990-83a9af347710-kube-api-access-jrxvq\") pod \"downloads-7954f5f757-ddnzw\" (UID: \"b6f870fb-6a4d-4d9c-9990-83a9af347710\") " pod="openshift-console/downloads-7954f5f757-ddnzw" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920009 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-serving-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920029 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920049 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920067 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd0807fd-8df9-432e-9189-fdbf743995f3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: 
\"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920086 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67dnm\" (UniqueName: \"kubernetes.io/projected/acf9bac3-c5bc-4294-83f0-1e52c261baa3-kube-api-access-67dnm\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpgbr\" (UID: \"acf9bac3-c5bc-4294-83f0-1e52c261baa3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920104 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920122 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920153 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920172 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920189 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd0807fd-8df9-432e-9189-fdbf743995f3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920215 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-config\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920232 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r74jq\" (UniqueName: \"kubernetes.io/projected/23b53bb9-d2c4-4c05-8311-cdc55be68712-kube-api-access-r74jq\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920252 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7qb\" (UniqueName: \"kubernetes.io/projected/064c5316-78a7-4320-a324-8ebe400e9db9-kube-api-access-tx7qb\") pod 
\"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920270 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-service-ca-bundle\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920293 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b53bb9-d2c4-4c05-8311-cdc55be68712-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920315 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ckpx\" (UniqueName: \"kubernetes.io/projected/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-kube-api-access-9ckpx\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920335 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-oauth-config\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920361 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920377 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-etcd-client\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920395 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-images\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920411 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/601d76ac-3e65-4ef1-9291-cd0e647ab37a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920427 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: 
\"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920427 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920449 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920468 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrr5\" (UniqueName: \"kubernetes.io/projected/94c481c5-c791-453a-a03e-2eb7a130c132-kube-api-access-frrr5\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920473 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-console-config\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920487 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdkn\" (UniqueName: \"kubernetes.io/projected/e964e5a5-fe00-4d83-8416-2e2bd64c359d-kube-api-access-gjdkn\") pod 
\"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920517 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-serving-cert\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920571 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920589 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/064c5316-78a7-4320-a324-8ebe400e9db9-machine-approver-tls\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920606 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920628 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b53bb9-d2c4-4c05-8311-cdc55be68712-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920645 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4hfq\" (UniqueName: \"kubernetes.io/projected/601d76ac-3e65-4ef1-9291-cd0e647ab37a-kube-api-access-z4hfq\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e964e5a5-fe00-4d83-8416-2e2bd64c359d-serving-cert\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920677 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v752\" (UniqueName: \"kubernetes.io/projected/dd0807fd-8df9-432e-9189-fdbf743995f3-kube-api-access-6v752\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920694 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnnq4\" (UniqueName: \"kubernetes.io/projected/45a57165-bc73-49db-abee-0af2b2f280e6-kube-api-access-fnnq4\") pod \"console-operator-58897d9998-rqspp\" (UID: 
\"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920709 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-audit-policies\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920728 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-dir\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920744 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920759 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45a57165-bc73-49db-abee-0af2b2f280e6-config\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920772 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg5vh\" (UniqueName: 
\"kubernetes.io/projected/6c28e183-5341-482b-9104-4ca0b17d4f3c-kube-api-access-qg5vh\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920786 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45a57165-bc73-49db-abee-0af2b2f280e6-serving-cert\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920800 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064c5316-78a7-4320-a324-8ebe400e9db9-config\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920815 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920831 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-service-ca\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920849 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920864 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45a57165-bc73-49db-abee-0af2b2f280e6-trusted-ca\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-oauth-serving-cert\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920918 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbxtk\" (UniqueName: \"kubernetes.io/projected/3d329678-3edc-4b70-9796-85c6ada120de-kube-api-access-tbxtk\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920938 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj9qp\" (UniqueName: \"kubernetes.io/projected/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-kube-api-access-cj9qp\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-policies\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920968 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920986 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.921001 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-trusted-ca-bundle\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.921729 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.921778 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-config\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.921793 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-config\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.921825 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-encryption-config\") pod \"apiserver-7bbb656c7d-9hzvc\" 
(UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.922147 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-dir\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.923283 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-audit-policies\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.923491 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45a57165-bc73-49db-abee-0af2b2f280e6-trusted-ca\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.920468 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqskc"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.923541 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.923574 4835 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.923589 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-k5ktg"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.923978 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-oauth-serving-cert\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.924194 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dcqgf"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.924336 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-trusted-ca-bundle\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.924378 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-service-ca-bundle\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.924677 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-service-ca\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " 
pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.924710 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/064c5316-78a7-4320-a324-8ebe400e9db9-auth-proxy-config\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925089 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-client-ca\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925109 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925128 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925146 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config\") 
pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925153 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/064c5316-78a7-4320-a324-8ebe400e9db9-auth-proxy-config\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925163 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925181 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7841ce5-1f76-4df0-929d-c6573d4ff806-serving-cert\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925202 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-serving-cert\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925218 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925238 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/acf9bac3-c5bc-4294-83f0-1e52c261baa3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpgbr\" (UID: \"acf9bac3-c5bc-4294-83f0-1e52c261baa3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925256 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925010 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925035 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k5ktg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925594 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6j2w\" (UniqueName: \"kubernetes.io/projected/94a7fb98-0826-4559-9113-aad4415a7f21-kube-api-access-x6j2w\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.925618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xppq\" (UniqueName: \"kubernetes.io/projected/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-kube-api-access-9xppq\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.926235 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-config\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.926414 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.926578 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k5ktg"] Feb 16 15:09:49 crc 
kubenswrapper[4835]: I0216 15:09:49.926546 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94a7fb98-0826-4559-9113-aad4415a7f21-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.926805 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.927350 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.927454 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e964e5a5-fe00-4d83-8416-2e2bd64c359d-serving-cert\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.927450 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dcqgf"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.927676 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.927944 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.928016 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45a57165-bc73-49db-abee-0af2b2f280e6-config\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.928281 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-client-ca\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.928298 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lhs7j"] Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.928862 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lhs7j" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.928886 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-etcd-client\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.928973 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.929035 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/064c5316-78a7-4320-a324-8ebe400e9db9-config\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.929081 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-oauth-config\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.929449 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.929704 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-encryption-config\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.930029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.930076 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-serving-cert\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.931095 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.931744 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.931806 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45a57165-bc73-49db-abee-0af2b2f280e6-serving-cert\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.931811 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94a7fb98-0826-4559-9113-aad4415a7f21-serving-cert\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.932930 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7841ce5-1f76-4df0-929d-c6573d4ff806-serving-cert\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.932980 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.933159 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7841ce5-1f76-4df0-929d-c6573d4ff806-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.943264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-serving-cert\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.944704 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.951779 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.972108 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 15:09:49 crc kubenswrapper[4835]: I0216 15:09:49.991795 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.011236 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/dd0807fd-8df9-432e-9189-fdbf743995f3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrxvq\" (UniqueName: \"kubernetes.io/projected/b6f870fb-6a4d-4d9c-9990-83a9af347710-kube-api-access-jrxvq\") pod \"downloads-7954f5f757-ddnzw\" (UID: \"b6f870fb-6a4d-4d9c-9990-83a9af347710\") " pod="openshift-console/downloads-7954f5f757-ddnzw" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026509 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd0807fd-8df9-432e-9189-fdbf743995f3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026553 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67dnm\" (UniqueName: \"kubernetes.io/projected/acf9bac3-c5bc-4294-83f0-1e52c261baa3-kube-api-access-67dnm\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpgbr\" (UID: \"acf9bac3-c5bc-4294-83f0-1e52c261baa3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026603 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd0807fd-8df9-432e-9189-fdbf743995f3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r74jq\" (UniqueName: \"kubernetes.io/projected/23b53bb9-d2c4-4c05-8311-cdc55be68712-kube-api-access-r74jq\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b53bb9-d2c4-4c05-8311-cdc55be68712-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026773 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b53bb9-d2c4-4c05-8311-cdc55be68712-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026803 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v752\" (UniqueName: \"kubernetes.io/projected/dd0807fd-8df9-432e-9189-fdbf743995f3-kube-api-access-6v752\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.026958 
4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/acf9bac3-c5bc-4294-83f0-1e52c261baa3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpgbr\" (UID: \"acf9bac3-c5bc-4294-83f0-1e52c261baa3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.028973 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b53bb9-d2c4-4c05-8311-cdc55be68712-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.029499 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd0807fd-8df9-432e-9189-fdbf743995f3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.030565 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd0807fd-8df9-432e-9189-fdbf743995f3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.031570 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 
15:09:50.031745 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23b53bb9-d2c4-4c05-8311-cdc55be68712-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.040588 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/acf9bac3-c5bc-4294-83f0-1e52c261baa3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpgbr\" (UID: \"acf9bac3-c5bc-4294-83f0-1e52c261baa3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.051785 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.070760 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.092018 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.111423 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.132073 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.151279 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.171639 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.192448 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.211695 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.232131 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.252789 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.272286 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.293772 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.313548 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.331837 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.352703 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.373401 4835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.393078 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.412817 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.431837 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.451473 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.473172 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.493250 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.514102 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.573341 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.592561 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.612751 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.632834 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.652810 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.676802 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.692282 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.712211 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.733082 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.752986 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.772427 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.792412 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.813361 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.830338 4835 
request.go:700] Waited for 1.0045368s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.832387 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.852964 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.872024 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.891746 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.912404 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.919961 4835 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.920090 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-client podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.420059241 +0000 UTC m=+140.712052166 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-client") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.920696 4835 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.920803 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-serving-cert podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.420773921 +0000 UTC m=+140.712766856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-serving-cert") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.920801 4835 secret.go:188] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.920926 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login podName:6c28e183-5341-482b-9104-4ca0b17d4f3c nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.420901764 +0000 UTC m=+140.712894689 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login") pod "oauth-openshift-558db77b4-ztcpg" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c") : failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.921009 4835 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.921086 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-audit podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.421067839 +0000 UTC m=+140.713060764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-audit") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.921126 4835 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.921178 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-serving-ca podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.421162652 +0000 UTC m=+140.713155577 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-serving-ca") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.922214 4835 secret.go:188] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.922299 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/064c5316-78a7-4320-a324-8ebe400e9db9-machine-approver-tls podName:064c5316-78a7-4320-a324-8ebe400e9db9 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.422279272 +0000 UTC m=+140.714272197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/064c5316-78a7-4320-a324-8ebe400e9db9-machine-approver-tls") pod "machine-approver-56656f9798-g9f7l" (UID: "064c5316-78a7-4320-a324-8ebe400e9db9") : failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.922330 4835 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.922465 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-config podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.422445807 +0000 UTC m=+140.714438742 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-config") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926068 4835 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926093 4835 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926151 4835 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926097 4835 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926186 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.426150499 +0000 UTC m=+140.718143434 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926226 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-images podName:601d76ac-3e65-4ef1-9291-cd0e647ab37a nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.426204211 +0000 UTC m=+140.718197146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-images") pod "machine-api-operator-5694c8668f-f6rdr" (UID: "601d76ac-3e65-4ef1-9291-cd0e647ab37a") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926252 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config podName:41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.426240662 +0000 UTC m=+140.718233587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config") pod "controller-manager-879f6c89f-swn24" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926275 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:51.426264532 +0000 UTC m=+140.718257467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926402 4835 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926480 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config podName:601d76ac-3e65-4ef1-9291-cd0e647ab37a nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.426456828 +0000 UTC m=+140.718449763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config") pod "machine-api-operator-5694c8668f-f6rdr" (UID: "601d76ac-3e65-4ef1-9291-cd0e647ab37a") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926560 4835 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926603 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert podName:41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.426590691 +0000 UTC m=+140.718583616 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert") pod "controller-manager-879f6c89f-swn24" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27") : failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926660 4835 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926701 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.426690084 +0000 UTC m=+140.718683019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926740 4835 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926804 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/601d76ac-3e65-4ef1-9291-cd0e647ab37a-machine-api-operator-tls podName:601d76ac-3e65-4ef1-9291-cd0e647ab37a nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.426777436 +0000 UTC m=+140.718770501 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/601d76ac-3e65-4ef1-9291-cd0e647ab37a-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-f6rdr" (UID: "601d76ac-3e65-4ef1-9291-cd0e647ab37a") : failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926879 4835 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.926941 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca podName:41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.42692229 +0000 UTC m=+140.718915315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca") pod "controller-manager-879f6c89f-swn24" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.928239 4835 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: E0216 15:09:50.928322 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles podName:41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:51.428303128 +0000 UTC m=+140.720296053 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles") pod "controller-manager-879f6c89f-swn24" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.934790 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.952149 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.972341 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:09:50 crc kubenswrapper[4835]: I0216 15:09:50.992231 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.012252 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.031920 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.051733 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.071992 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 
15:09:51.092633 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.112887 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.131762 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.151709 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.171952 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.192758 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.211650 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.232240 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.252246 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.272593 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.306934 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.314204 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.332167 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.351706 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.371760 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.401607 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.411954 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.432728 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.451948 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.455183 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:51 crc kubenswrapper[4835]: 
I0216 15:09:51.455450 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.455735 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.455928 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.456105 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.456309 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-client\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 
15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.456477 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-serving-cert\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.456753 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-audit\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.456941 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.457100 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.457257 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-serving-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 
15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.457466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.457775 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.457952 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-images\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.458117 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/601d76ac-3e65-4ef1-9291-cd0e647ab37a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.458306 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.458453 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/064c5316-78a7-4320-a324-8ebe400e9db9-machine-approver-tls\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.471601 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.492013 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.511881 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.561425 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq7s8\" (UniqueName: \"kubernetes.io/projected/d7841ce5-1f76-4df0-929d-c6573d4ff806-kube-api-access-xq7s8\") pod \"authentication-operator-69f744f599-94mgp\" (UID: \"d7841ce5-1f76-4df0-929d-c6573d4ff806\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.580732 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdkn\" (UniqueName: \"kubernetes.io/projected/e964e5a5-fe00-4d83-8416-2e2bd64c359d-kube-api-access-gjdkn\") pod \"route-controller-manager-6576b87f9c-n4srv\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.603479 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.603951 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnnq4\" (UniqueName: \"kubernetes.io/projected/45a57165-bc73-49db-abee-0af2b2f280e6-kube-api-access-fnnq4\") pod \"console-operator-58897d9998-rqspp\" (UID: \"45a57165-bc73-49db-abee-0af2b2f280e6\") " pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.622381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7qb\" (UniqueName: \"kubernetes.io/projected/064c5316-78a7-4320-a324-8ebe400e9db9-kube-api-access-tx7qb\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.635330 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.642064 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ckpx\" (UniqueName: \"kubernetes.io/projected/725def92-8cb7-4ac0-8a1d-f72d03f8f7ca-kube-api-access-9ckpx\") pod \"openshift-config-operator-7777fb866f-4nrjr\" (UID: \"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.669653 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.672053 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.684435 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.692997 4835 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.712747 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.748652 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xppq\" (UniqueName: \"kubernetes.io/projected/ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d-kube-api-access-9xppq\") pod \"openshift-apiserver-operator-796bbdcf4f-zdm8w\" (UID: \"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.753146 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.772496 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.792627 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.815212 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.832584 4835 request.go:700] Waited for 1.906128745s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.868376 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-94mgp"] Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.869115 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv"] Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.870330 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbxtk\" (UniqueName: \"kubernetes.io/projected/3d329678-3edc-4b70-9796-85c6ada120de-kube-api-access-tbxtk\") pod \"console-f9d7485db-28xh9\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.915435 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg5vh\" (UniqueName: \"kubernetes.io/projected/6c28e183-5341-482b-9104-4ca0b17d4f3c-kube-api-access-qg5vh\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.924713 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.932963 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 15:09:51 crc 
kubenswrapper[4835]: I0216 15:09:51.935468 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6j2w\" (UniqueName: \"kubernetes.io/projected/94a7fb98-0826-4559-9113-aad4415a7f21-kube-api-access-x6j2w\") pod \"apiserver-7bbb656c7d-9hzvc\" (UID: \"94a7fb98-0826-4559-9113-aad4415a7f21\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.955642 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.957204 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.976050 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.992695 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrxvq\" (UniqueName: \"kubernetes.io/projected/b6f870fb-6a4d-4d9c-9990-83a9af347710-kube-api-access-jrxvq\") pod \"downloads-7954f5f757-ddnzw\" (UID: \"b6f870fb-6a4d-4d9c-9990-83a9af347710\") " pod="openshift-console/downloads-7954f5f757-ddnzw" Feb 16 15:09:51 crc kubenswrapper[4835]: I0216 15:09:51.994966 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.019801 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ddnzw" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.040849 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r74jq\" (UniqueName: \"kubernetes.io/projected/23b53bb9-d2c4-4c05-8311-cdc55be68712-kube-api-access-r74jq\") pod \"openshift-controller-manager-operator-756b6f6bc6-lpr5m\" (UID: \"23b53bb9-d2c4-4c05-8311-cdc55be68712\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.047837 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v752\" (UniqueName: \"kubernetes.io/projected/dd0807fd-8df9-432e-9189-fdbf743995f3-kube-api-access-6v752\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.069971 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd0807fd-8df9-432e-9189-fdbf743995f3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nqflc\" (UID: \"dd0807fd-8df9-432e-9189-fdbf743995f3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.085466 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.094664 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.113680 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.118759 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.127916 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.149582 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rqspp"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.152072 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.169801 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.170064 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: 
\"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.170369 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:52.670355611 +0000 UTC m=+141.962348506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.170581 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8445cc47-44b3-41c2-8dc0-41a05c56b6e2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25tf8\" (UID: \"8445cc47-44b3-41c2-8dc0-41a05c56b6e2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.170624 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-proxy-tls\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.170698 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/52c8913d-e3d1-4f98-b14d-06369bb56b95-config\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.170732 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8cb8fe18-6040-4d23-a89b-e338df070e75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.170776 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e2d62a-b77c-4a06-bd55-6d835395a4be-serving-cert\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171164 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfwbz\" (UniqueName: \"kubernetes.io/projected/a6c12c59-b21e-47cb-a55c-611be47e4039-kube-api-access-dfwbz\") pod \"dns-operator-744455d44c-rflfs\" (UID: \"a6c12c59-b21e-47cb-a55c-611be47e4039\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171213 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-client\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171242 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jksbr\" (UniqueName: \"kubernetes.io/projected/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-kube-api-access-jksbr\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171371 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p595\" (UniqueName: \"kubernetes.io/projected/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-kube-api-access-8p595\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171452 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtfh7\" (UniqueName: \"kubernetes.io/projected/8445cc47-44b3-41c2-8dc0-41a05c56b6e2-kube-api-access-wtfh7\") pod \"cluster-samples-operator-665b6dd947-25tf8\" (UID: \"8445cc47-44b3-41c2-8dc0-41a05c56b6e2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171496 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6c12c59-b21e-47cb-a55c-611be47e4039-metrics-tls\") pod \"dns-operator-744455d44c-rflfs\" (UID: \"a6c12c59-b21e-47cb-a55c-611be47e4039\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171513 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52c8913d-e3d1-4f98-b14d-06369bb56b95-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171545 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-service-ca\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171562 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-images\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171627 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-trusted-ca\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171646 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171692 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrzz\" (UniqueName: \"kubernetes.io/projected/5c596d5a-6315-4790-950f-097c373225d2-kube-api-access-bhrzz\") pod \"migrator-59844c95c7-hc4hn\" (UID: \"5c596d5a-6315-4790-950f-097c373225d2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171710 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-signing-cabundle\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171726 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-certificates\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.171781 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-ca\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.172670 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495nb\" 
(UniqueName: \"kubernetes.io/projected/f3e2d62a-b77c-4a06-bd55-6d835395a4be-kube-api-access-495nb\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.172722 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-config\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.175354 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 15:09:52 crc kubenswrapper[4835]: W0216 15:09:52.175518 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a57165_bc73_49db_abee_0af2b2f280e6.slice/crio-1d233f9f50cdf855774e1ea0c0dfd5a48336f68b5a845101cba7466b11b6632f WatchSource:0}: Error finding container 1d233f9f50cdf855774e1ea0c0dfd5a48336f68b5a845101cba7466b11b6632f: Status 404 returned error can't find the container with id 1d233f9f50cdf855774e1ea0c0dfd5a48336f68b5a845101cba7466b11b6632f Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.176633 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.176683 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-tls\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.176840 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52c8913d-e3d1-4f98-b14d-06369bb56b95-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.178335 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8cb8fe18-6040-4d23-a89b-e338df070e75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.178392 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-bound-sa-token\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.178430 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-signing-key\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 
15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.185598 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8djdq\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-kube-api-access-8djdq\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: W0216 15:09:52.187299 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded5b75f8_a0ff_4b92_8eb9_7ad9004a273d.slice/crio-237e1529c6fdf512b0d0c0cc1efa5dd91855af4995c3b82451d42f83ebc1946b WatchSource:0}: Error finding container 237e1529c6fdf512b0d0c0cc1efa5dd91855af4995c3b82451d42f83ebc1946b: Status 404 returned error can't find the container with id 237e1529c6fdf512b0d0c0cc1efa5dd91855af4995c3b82451d42f83ebc1946b Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.192403 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.212756 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.217581 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.231198 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.237635 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-audit\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.251489 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.264083 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ddnzw"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.271927 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.282267 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/064c5316-78a7-4320-a324-8ebe400e9db9-machine-approver-tls\") pod \"machine-approver-56656f9798-g9f7l\" (UID: \"064c5316-78a7-4320-a324-8ebe400e9db9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.287883 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.288025 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:52.788008542 +0000 UTC m=+142.080001437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288064 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18107f74-1f9f-4fbb-984a-b77b97c3d168-srv-cert\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288090 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvvwr\" (UniqueName: \"kubernetes.io/projected/754cd689-fe65-4acb-b2b4-854f89bf434e-kube-api-access-zvvwr\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288106 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-node-bootstrap-token\") pod \"machine-config-server-lhs7j\" (UID: \"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288148 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8djdq\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-kube-api-access-8djdq\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288165 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d881f5-1d23-49c7-8d84-71231e638736-service-ca-bundle\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288182 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8445cc47-44b3-41c2-8dc0-41a05c56b6e2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25tf8\" (UID: \"8445cc47-44b3-41c2-8dc0-41a05c56b6e2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288237 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-proxy-tls\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-csi-data-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " 
pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288302 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-registration-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288319 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s69h8\" (UniqueName: \"kubernetes.io/projected/18107f74-1f9f-4fbb-984a-b77b97c3d168-kube-api-access-s69h8\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288354 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e2d62a-b77c-4a06-bd55-6d835395a4be-serving-cert\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288413 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/754cd689-fe65-4acb-b2b4-854f89bf434e-proxy-tls\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288433 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmjr\" (UniqueName: 
\"kubernetes.io/projected/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-kube-api-access-gkmjr\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288482 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfwbz\" (UniqueName: \"kubernetes.io/projected/a6c12c59-b21e-47cb-a55c-611be47e4039-kube-api-access-dfwbz\") pod \"dns-operator-744455d44c-rflfs\" (UID: \"a6c12c59-b21e-47cb-a55c-611be47e4039\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288516 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-client\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288574 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288591 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 
15:09:52.288628 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288648 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288663 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-plugins-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288678 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-default-certificate\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288713 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-certs\") pod \"machine-config-server-lhs7j\" (UID: 
\"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288741 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288789 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtfh7\" (UniqueName: \"kubernetes.io/projected/8445cc47-44b3-41c2-8dc0-41a05c56b6e2-kube-api-access-wtfh7\") pod \"cluster-samples-operator-665b6dd947-25tf8\" (UID: \"8445cc47-44b3-41c2-8dc0-41a05c56b6e2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288808 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-socket-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288824 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37078ebd-3cc2-4e3e-9704-026d66636bfd-config-volume\") pod \"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288840 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pqtkc\" (UniqueName: \"kubernetes.io/projected/aeddce00-4ffb-40ba-832b-a2d30aee4528-kube-api-access-pqtkc\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288913 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-stats-auth\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288959 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-service-ca\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288974 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-images\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.288992 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8d2z\" (UniqueName: \"kubernetes.io/projected/569d3eef-2b86-44fb-90a1-2bceae4d2e09-kube-api-access-n8d2z\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" Feb 16 15:09:52 crc kubenswrapper[4835]: 
I0216 15:09:52.289030 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87kwh\" (UniqueName: \"kubernetes.io/projected/5e09e1ba-32be-4297-b408-6bdcd75c0478-kube-api-access-87kwh\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289052 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhrzz\" (UniqueName: \"kubernetes.io/projected/5c596d5a-6315-4790-950f-097c373225d2-kube-api-access-bhrzz\") pod \"migrator-59844c95c7-hc4hn\" (UID: \"5c596d5a-6315-4790-950f-097c373225d2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289069 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-certificates\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289104 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-metrics-certs\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289122 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a7f76a-260e-4b0d-896f-5c40f3681665-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495nb\" (UniqueName: \"kubernetes.io/projected/f3e2d62a-b77c-4a06-bd55-6d835395a4be-kube-api-access-495nb\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289204 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/085d8584-110e-47bc-901f-0ec23623f09d-cert\") pod \"ingress-canary-k5ktg\" (UID: \"085d8584-110e-47bc-901f-0ec23623f09d\") " pod="openshift-ingress-canary/ingress-canary-k5ktg" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289228 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-config\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289276 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e09e1ba-32be-4297-b408-6bdcd75c0478-apiservice-cert\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-tls\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289320 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2znds\" (UniqueName: \"kubernetes.io/projected/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-kube-api-access-2znds\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289357 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e09e1ba-32be-4297-b408-6bdcd75c0478-webhook-cert\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289377 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b81b99-f692-4fed-b053-ff1545ff9532-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289394 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-serving-cert\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289433 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc256\" (UniqueName: \"kubernetes.io/projected/085d8584-110e-47bc-901f-0ec23623f09d-kube-api-access-gc256\") pod \"ingress-canary-k5ktg\" (UID: \"085d8584-110e-47bc-901f-0ec23623f09d\") " pod="openshift-ingress-canary/ingress-canary-k5ktg" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8cb8fe18-6040-4d23-a89b-e338df070e75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289493 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-bound-sa-token\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289510 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-signing-key\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289541 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b81b99-f692-4fed-b053-ff1545ff9532-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289562 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5e09e1ba-32be-4297-b408-6bdcd75c0478-tmpfs\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289595 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289621 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbrhz\" (UniqueName: \"kubernetes.io/projected/ec89a24c-0f8d-46a3-9a45-3334a7b13c4c-kube-api-access-vbrhz\") pod \"package-server-manager-789f6589d5-r5clm\" (UID: \"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289636 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/569d3eef-2b86-44fb-90a1-2bceae4d2e09-secret-volume\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" Feb 16 15:09:52 
crc kubenswrapper[4835]: I0216 15:09:52.289668 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37078ebd-3cc2-4e3e-9704-026d66636bfd-metrics-tls\") pod \"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289682 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-config\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289700 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289717 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdxg\" (UniqueName: \"kubernetes.io/projected/87a7f76a-260e-4b0d-896f-5c40f3681665-kube-api-access-wsdxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289745 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/52c8913d-e3d1-4f98-b14d-06369bb56b95-config\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289765 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8cb8fe18-6040-4d23-a89b-e338df070e75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289781 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gblg8\" (UniqueName: \"kubernetes.io/projected/59d881f5-1d23-49c7-8d84-71231e638736-kube-api-access-gblg8\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289795 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289813 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jksbr\" (UniqueName: \"kubernetes.io/projected/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-kube-api-access-jksbr\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:52 crc 
kubenswrapper[4835]: I0216 15:09:52.289830 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e73cb380-2c32-4f3a-a14e-bc062553eb81-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zbzmb\" (UID: \"e73cb380-2c32-4f3a-a14e-bc062553eb81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289855 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-srv-cert\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289870 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-config\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p595\" (UniqueName: \"kubernetes.io/projected/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-kube-api-access-8p595\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289938 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6c12c59-b21e-47cb-a55c-611be47e4039-metrics-tls\") pod 
\"dns-operator-744455d44c-rflfs\" (UID: \"a6c12c59-b21e-47cb-a55c-611be47e4039\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52c8913d-e3d1-4f98-b14d-06369bb56b95-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289970 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8slsk\" (UniqueName: \"kubernetes.io/projected/e73cb380-2c32-4f3a-a14e-bc062553eb81-kube-api-access-8slsk\") pod \"multus-admission-controller-857f4d67dd-zbzmb\" (UID: \"e73cb380-2c32-4f3a-a14e-bc062553eb81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.289986 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18107f74-1f9f-4fbb-984a-b77b97c3d168-profile-collector-cert\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290024 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-trusted-ca\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290039 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290071 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-signing-cabundle\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290113 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4s88\" (UniqueName: \"kubernetes.io/projected/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-kube-api-access-z4s88\") pod \"machine-config-server-lhs7j\" (UID: \"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290156 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqxnw\" (UniqueName: \"kubernetes.io/projected/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-kube-api-access-cqxnw\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290196 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a7f76a-260e-4b0d-896f-5c40f3681665-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290214 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/569d3eef-2b86-44fb-90a1-2bceae4d2e09-config-volume\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290239 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-ca\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290277 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/754cd689-fe65-4acb-b2b4-854f89bf434e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290294 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89a24c-0f8d-46a3-9a45-3334a7b13c4c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r5clm\" (UID: \"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8bcj\" (UniqueName: \"kubernetes.io/projected/37078ebd-3cc2-4e3e-9704-026d66636bfd-kube-api-access-n8bcj\") pod \"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290347 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290365 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21b81b99-f692-4fed-b053-ff1545ff9532-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290382 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhw7\" (UniqueName: \"kubernetes.io/projected/c21c247b-8282-4ea0-aaac-cd2908a9cfac-kube-api-access-lqhw7\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290447 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-mountpoint-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.290468 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52c8913d-e3d1-4f98-b14d-06369bb56b95-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.291624 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-proxy-tls\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.292164 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-images\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.292232 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-service-ca\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.292933 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52c8913d-e3d1-4f98-b14d-06369bb56b95-config\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.293497 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.293613 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-certificates\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.294274 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6c12c59-b21e-47cb-a55c-611be47e4039-metrics-tls\") pod \"dns-operator-744455d44c-rflfs\" (UID: \"a6c12c59-b21e-47cb-a55c-611be47e4039\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.296679 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e2d62a-b77c-4a06-bd55-6d835395a4be-serving-cert\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.297317 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-signing-key\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.297554 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-tls\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.297682 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-client\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.298276 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.298475 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:52.798449189 +0000 UTC m=+142.090442084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.298549 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8cb8fe18-6040-4d23-a89b-e338df070e75-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.298808 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-trusted-ca\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.298950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52c8913d-e3d1-4f98-b14d-06369bb56b95-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.299232 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-images\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: 
\"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.299288 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8445cc47-44b3-41c2-8dc0-41a05c56b6e2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25tf8\" (UID: \"8445cc47-44b3-41c2-8dc0-41a05c56b6e2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.299860 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-signing-cabundle\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.300213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-etcd-ca\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.302711 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8cb8fe18-6040-4d23-a89b-e338df070e75-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.304035 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e2d62a-b77c-4a06-bd55-6d835395a4be-config\") pod 
\"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.311344 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.313891 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.324309 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-client\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.330892 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.332238 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.341814 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/601d76ac-3e65-4ef1-9291-cd0e647ab37a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:52 crc kubenswrapper[4835]: W0216 15:09:52.345690 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b53bb9_d2c4_4c05_8311_cdc55be68712.slice/crio-5fe5ab1ec2ffb6424bb836912470c1ff564e9b11f95a2f81ea2957e88153f869 WatchSource:0}: Error finding container 5fe5ab1ec2ffb6424bb836912470c1ff564e9b11f95a2f81ea2957e88153f869: Status 404 returned error can't find the container with id 5fe5ab1ec2ffb6424bb836912470c1ff564e9b11f95a2f81ea2957e88153f869 Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.355950 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.360049 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-serving-cert\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.372597 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.390771 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.390976 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8slsk\" (UniqueName: \"kubernetes.io/projected/e73cb380-2c32-4f3a-a14e-bc062553eb81-kube-api-access-8slsk\") pod \"multus-admission-controller-857f4d67dd-zbzmb\" (UID: \"e73cb380-2c32-4f3a-a14e-bc062553eb81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" Feb 16 15:09:52 crc 
kubenswrapper[4835]: I0216 15:09:52.391000 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18107f74-1f9f-4fbb-984a-b77b97c3d168-profile-collector-cert\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391019 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4s88\" (UniqueName: \"kubernetes.io/projected/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-kube-api-access-z4s88\") pod \"machine-config-server-lhs7j\" (UID: \"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391036 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqxnw\" (UniqueName: \"kubernetes.io/projected/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-kube-api-access-cqxnw\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391053 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a7f76a-260e-4b0d-896f-5c40f3681665-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391069 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/569d3eef-2b86-44fb-90a1-2bceae4d2e09-config-volume\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391095 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/754cd689-fe65-4acb-b2b4-854f89bf434e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391117 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89a24c-0f8d-46a3-9a45-3334a7b13c4c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r5clm\" (UID: \"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391136 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8bcj\" (UniqueName: \"kubernetes.io/projected/37078ebd-3cc2-4e3e-9704-026d66636bfd-kube-api-access-n8bcj\") pod \"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391153 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391169 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21b81b99-f692-4fed-b053-ff1545ff9532-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391297 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhw7\" (UniqueName: \"kubernetes.io/projected/c21c247b-8282-4ea0-aaac-cd2908a9cfac-kube-api-access-lqhw7\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391337 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-mountpoint-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18107f74-1f9f-4fbb-984a-b77b97c3d168-srv-cert\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391403 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvvwr\" (UniqueName: \"kubernetes.io/projected/754cd689-fe65-4acb-b2b4-854f89bf434e-kube-api-access-zvvwr\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391427 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-node-bootstrap-token\") pod \"machine-config-server-lhs7j\" (UID: \"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391461 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d881f5-1d23-49c7-8d84-71231e638736-service-ca-bundle\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391487 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-csi-data-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391514 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-registration-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391556 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s69h8\" (UniqueName: \"kubernetes.io/projected/18107f74-1f9f-4fbb-984a-b77b97c3d168-kube-api-access-s69h8\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkmjr\" (UniqueName: \"kubernetes.io/projected/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-kube-api-access-gkmjr\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391609 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/754cd689-fe65-4acb-b2b4-854f89bf434e-proxy-tls\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391655 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391678 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391702 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391726 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391747 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-plugins-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391772 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-default-certificate\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391797 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-certs\") pod \"machine-config-server-lhs7j\" (UID: \"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391834 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37078ebd-3cc2-4e3e-9704-026d66636bfd-config-volume\") pod \"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391857 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391905 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-socket-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391929 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqtkc\" (UniqueName: \"kubernetes.io/projected/aeddce00-4ffb-40ba-832b-a2d30aee4528-kube-api-access-pqtkc\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-stats-auth\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.391986 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8d2z\" (UniqueName: \"kubernetes.io/projected/569d3eef-2b86-44fb-90a1-2bceae4d2e09-kube-api-access-n8d2z\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392009 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87kwh\" (UniqueName: \"kubernetes.io/projected/5e09e1ba-32be-4297-b408-6bdcd75c0478-kube-api-access-87kwh\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392041 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-metrics-certs\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392063 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a7f76a-260e-4b0d-896f-5c40f3681665-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392095 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/085d8584-110e-47bc-901f-0ec23623f09d-cert\") pod \"ingress-canary-k5ktg\" (UID: \"085d8584-110e-47bc-901f-0ec23623f09d\") " pod="openshift-ingress-canary/ingress-canary-k5ktg"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392118 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e09e1ba-32be-4297-b408-6bdcd75c0478-apiservice-cert\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392214 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2znds\" (UniqueName: \"kubernetes.io/projected/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-kube-api-access-2znds\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392239 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e09e1ba-32be-4297-b408-6bdcd75c0478-webhook-cert\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392267 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b81b99-f692-4fed-b053-ff1545ff9532-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392291 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-serving-cert\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392307 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc256\" (UniqueName: \"kubernetes.io/projected/085d8584-110e-47bc-901f-0ec23623f09d-kube-api-access-gc256\") pod \"ingress-canary-k5ktg\" (UID: \"085d8584-110e-47bc-901f-0ec23623f09d\") " pod="openshift-ingress-canary/ingress-canary-k5ktg"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392342 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b81b99-f692-4fed-b053-ff1545ff9532-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392396 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5e09e1ba-32be-4297-b408-6bdcd75c0478-tmpfs\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392418 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbrhz\" (UniqueName: \"kubernetes.io/projected/ec89a24c-0f8d-46a3-9a45-3334a7b13c4c-kube-api-access-vbrhz\") pod \"package-server-manager-789f6589d5-r5clm\" (UID: \"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392499 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/569d3eef-2b86-44fb-90a1-2bceae4d2e09-secret-volume\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392521 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-config\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392575 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37078ebd-3cc2-4e3e-9704-026d66636bfd-metrics-tls\") pod \"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392611 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsdxg\" (UniqueName: \"kubernetes.io/projected/87a7f76a-260e-4b0d-896f-5c40f3681665-kube-api-access-wsdxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gblg8\" (UniqueName: \"kubernetes.io/projected/59d881f5-1d23-49c7-8d84-71231e638736-kube-api-access-gblg8\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392684 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392712 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e73cb380-2c32-4f3a-a14e-bc062553eb81-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zbzmb\" (UID: \"e73cb380-2c32-4f3a-a14e-bc062553eb81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392731 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-config\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.392758 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-srv-cert\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.393633 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.394141 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-registration-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.394247 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:52.894230347 +0000 UTC m=+142.186223312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.394894 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e09e1ba-32be-4297-b408-6bdcd75c0478-apiservice-cert\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.397487 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e09e1ba-32be-4297-b408-6bdcd75c0478-webhook-cert\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.398461 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-srv-cert\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.398505 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/569d3eef-2b86-44fb-90a1-2bceae4d2e09-secret-volume\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.399264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-config\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.399953 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.400667 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-mountpoint-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.401522 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37078ebd-3cc2-4e3e-9704-026d66636bfd-metrics-tls\") pod \"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.401655 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.401851 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.402295 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-config\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.402559 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e73cb380-2c32-4f3a-a14e-bc062553eb81-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zbzmb\" (UID: \"e73cb380-2c32-4f3a-a14e-bc062553eb81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.402898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5e09e1ba-32be-4297-b408-6bdcd75c0478-tmpfs\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.402919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b81b99-f692-4fed-b053-ff1545ff9532-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.402976 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-plugins-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.402983 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.403865 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b81b99-f692-4fed-b053-ff1545ff9532-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.404847 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a7f76a-260e-4b0d-896f-5c40f3681665-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.406029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-default-certificate\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.406339 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.406393 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-serving-cert\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.406641 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-csi-data-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.406977 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18107f74-1f9f-4fbb-984a-b77b97c3d168-profile-collector-cert\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.407013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d881f5-1d23-49c7-8d84-71231e638736-service-ca-bundle\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.407027 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aeddce00-4ffb-40ba-832b-a2d30aee4528-socket-dir\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.407512 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18107f74-1f9f-4fbb-984a-b77b97c3d168-srv-cert\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.407522 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37078ebd-3cc2-4e3e-9704-026d66636bfd-config-volume\") pod \"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.407840 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-node-bootstrap-token\") pod \"machine-config-server-lhs7j\" (UID: \"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.408122 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/754cd689-fe65-4acb-b2b4-854f89bf434e-proxy-tls\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.408340 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/754cd689-fe65-4acb-b2b4-854f89bf434e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.408494 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.408510 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-certs\") pod \"machine-config-server-lhs7j\" (UID: \"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.408927 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a7f76a-260e-4b0d-896f-5c40f3681665-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.409045 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/569d3eef-2b86-44fb-90a1-2bceae4d2e09-config-volume\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.409173 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-metrics-certs\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.410634 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89a24c-0f8d-46a3-9a45-3334a7b13c4c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r5clm\" (UID: \"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.411099 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/085d8584-110e-47bc-901f-0ec23623f09d-cert\") pod \"ingress-canary-k5ktg\" (UID: \"085d8584-110e-47bc-901f-0ec23623f09d\") " pod="openshift-ingress-canary/ingress-canary-k5ktg"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.411936 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.412670 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/59d881f5-1d23-49c7-8d84-71231e638736-stats-auth\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.428207 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-etcd-serving-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.437240 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.448430 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.450379 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" event={"ID":"d7841ce5-1f76-4df0-929d-c6573d4ff806","Type":"ContainerStarted","Data":"c72af5fc006cad5b785a95747ec53daad3978488e0d9b53ef04303930ad49d9c"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.450423 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" 
event={"ID":"d7841ce5-1f76-4df0-929d-c6573d4ff806","Type":"ContainerStarted","Data":"38a0045957baa732db74b57638b41487734859cf5f5083e83f67d3fb0899324b"} Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.456013 4835 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.456073 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.456138 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config podName:601d76ac-3e65-4ef1-9291-cd0e647ab37a nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.456093692 +0000 UTC m=+142.748086587 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config") pod "machine-api-operator-5694c8668f-f6rdr" (UID: "601d76ac-3e65-4ef1-9291-cd0e647ab37a") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.456490 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" event={"ID":"23b53bb9-d2c4-4c05-8311-cdc55be68712","Type":"ContainerStarted","Data":"5fe5ab1ec2ffb6424bb836912470c1ff564e9b11f95a2f81ea2957e88153f869"} Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.457134 4835 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.457237 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.457210972 +0000 UTC m=+142.749204097 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.458032 4835 secret.go:188] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.458104 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.458084986 +0000 UTC m=+142.750077881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync secret cache: timed out waiting for the condition Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.459050 4835 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.459116 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle podName:94c481c5-c791-453a-a03e-2eb7a130c132 nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:53.459094624 +0000 UTC m=+142.751087719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle") pod "apiserver-76f77b778f-rcbfk" (UID: "94c481c5-c791-453a-a03e-2eb7a130c132") : failed to sync configmap cache: timed out waiting for the condition Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.464135 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" event={"ID":"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d","Type":"ContainerStarted","Data":"e75795f7112fbb968584c6ad24e860287372277c60778b4be38fa59edd2b16fa"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.464161 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" event={"ID":"ed5b75f8-a0ff-4b92-8eb9-7ad9004a273d","Type":"ContainerStarted","Data":"237e1529c6fdf512b0d0c0cc1efa5dd91855af4995c3b82451d42f83ebc1946b"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.471043 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-28xh9"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.471912 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.472921 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ztcpg\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.473883 4835 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.475017 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" event={"ID":"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca","Type":"ContainerStarted","Data":"89b6b94d727d3e232d5eaa65f3745deb75fbd10b53a55502bbb951ef030939aa"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.475047 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" event={"ID":"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca","Type":"ContainerStarted","Data":"f6b4ddef73d2c4f5d0e0428d4b752513ab8446eab3ef5d863d28f2ec27cbc193"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.477278 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" event={"ID":"e964e5a5-fe00-4d83-8416-2e2bd64c359d","Type":"ContainerStarted","Data":"b61f118f70c50f66754a126dee0ea5e57875b369b31a3605296cfeda141ac611"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.477306 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" event={"ID":"e964e5a5-fe00-4d83-8416-2e2bd64c359d","Type":"ContainerStarted","Data":"1e2ff8e9321719e029e3300a15adcd3e0ae1cdccaf5aeca8b4d8b904f2c4cfd2"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.477515 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.478894 4835 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n4srv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial 
tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.478927 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" podUID="e964e5a5-fe00-4d83-8416-2e2bd64c359d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.482014 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ddnzw" event={"ID":"b6f870fb-6a4d-4d9c-9990-83a9af347710","Type":"ContainerStarted","Data":"65eaa338e5e55448433a9aa1d1def44790264ec152c6a6171b9869a2ebc58c14"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.482059 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ddnzw" event={"ID":"b6f870fb-6a4d-4d9c-9990-83a9af347710","Type":"ContainerStarted","Data":"67216478926ef8327c49967a09f273b90dc41bba94912f0f0d826d2dfe28da17"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.482664 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ddnzw" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.484886 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rqspp" event={"ID":"45a57165-bc73-49db-abee-0af2b2f280e6","Type":"ContainerStarted","Data":"01175f46e5454595e180648da8d4fcf90e35a854ed1155401f0c6eca5be26f70"} Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.484954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rqspp" event={"ID":"45a57165-bc73-49db-abee-0af2b2f280e6","Type":"ContainerStarted","Data":"1d233f9f50cdf855774e1ea0c0dfd5a48336f68b5a845101cba7466b11b6632f"} Feb 16 15:09:52 
crc kubenswrapper[4835]: I0216 15:09:52.485311 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.485422 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.486637 4835 patch_prober.go:28] interesting pod/console-operator-58897d9998-rqspp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.487017 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rqspp" podUID="45a57165-bc73-49db-abee-0af2b2f280e6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.487612 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddnzw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.487634 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddnzw" podUID="b6f870fb-6a4d-4d9c-9990-83a9af347710" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.491515 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.501324 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.501872 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.001853572 +0000 UTC m=+142.293846457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: W0216 15:09:52.505748 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94a7fb98_0826_4559_9113_aad4415a7f21.slice/crio-614375360ab42e41a94c0c12260113938d5b1f9aedfed51e689836758734eb84 WatchSource:0}: Error finding container 614375360ab42e41a94c0c12260113938d5b1f9aedfed51e689836758734eb84: Status 404 returned error can't find the container with id 614375360ab42e41a94c0c12260113938d5b1f9aedfed51e689836758734eb84 Feb 16 15:09:52 crc kubenswrapper[4835]: W0216 15:09:52.507227 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d329678_3edc_4b70_9796_85c6ada120de.slice/crio-1a48093372db7728dc7fe2be114ed60de2f868a15910037967158578bcc1c509 WatchSource:0}: Error finding container 1a48093372db7728dc7fe2be114ed60de2f868a15910037967158578bcc1c509: Status 404 returned error can't find the container with id 1a48093372db7728dc7fe2be114ed60de2f868a15910037967158578bcc1c509 Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.512263 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.525361 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67dnm\" (UniqueName: \"kubernetes.io/projected/acf9bac3-c5bc-4294-83f0-1e52c261baa3-kube-api-access-67dnm\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpgbr\" (UID: \"acf9bac3-c5bc-4294-83f0-1e52c261baa3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.535429 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4hfq\" (UniqueName: \"kubernetes.io/projected/601d76ac-3e65-4ef1-9291-cd0e647ab37a-kube-api-access-z4hfq\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.536223 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.556317 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.558199 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.579474 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.584302 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrr5\" (UniqueName: \"kubernetes.io/projected/94c481c5-c791-453a-a03e-2eb7a130c132-kube-api-access-frrr5\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.592187 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.602318 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.604017 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.103993726 +0000 UTC m=+142.395986621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.615486 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.624023 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj9qp\" (UniqueName: \"kubernetes.io/projected/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-kube-api-access-cj9qp\") pod \"controller-manager-879f6c89f-swn24\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.673474 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8djdq\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-kube-api-access-8djdq\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.693139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/52c8913d-e3d1-4f98-b14d-06369bb56b95-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-cxxgb\" (UID: \"52c8913d-e3d1-4f98-b14d-06369bb56b95\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.698763 4835 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.704275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.704907 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.204890105 +0000 UTC m=+142.496883000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.705478 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-bound-sa-token\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.706466 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.729931 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhrzz\" (UniqueName: \"kubernetes.io/projected/5c596d5a-6315-4790-950f-097c373225d2-kube-api-access-bhrzz\") pod \"migrator-59844c95c7-hc4hn\" (UID: \"5c596d5a-6315-4790-950f-097c373225d2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.731777 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.746097 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtfh7\" (UniqueName: \"kubernetes.io/projected/8445cc47-44b3-41c2-8dc0-41a05c56b6e2-kube-api-access-wtfh7\") pod \"cluster-samples-operator-665b6dd947-25tf8\" (UID: \"8445cc47-44b3-41c2-8dc0-41a05c56b6e2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.768991 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jksbr\" (UniqueName: \"kubernetes.io/projected/7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278-kube-api-access-jksbr\") pod \"service-ca-9c57cc56f-z7cdv\" (UID: \"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278\") " pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.789452 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495nb\" (UniqueName: \"kubernetes.io/projected/f3e2d62a-b77c-4a06-bd55-6d835395a4be-kube-api-access-495nb\") pod \"etcd-operator-b45778765-82spz\" (UID: \"f3e2d62a-b77c-4a06-bd55-6d835395a4be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:52 crc 
kubenswrapper[4835]: I0216 15:09:52.805421 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.806005 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.305978699 +0000 UTC m=+142.597971674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.806161 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.806573 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:53.306521574 +0000 UTC m=+142.598514479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.821159 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfwbz\" (UniqueName: \"kubernetes.io/projected/a6c12c59-b21e-47cb-a55c-611be47e4039-kube-api-access-dfwbz\") pod \"dns-operator-744455d44c-rflfs\" (UID: \"a6c12c59-b21e-47cb-a55c-611be47e4039\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.830136 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p595\" (UniqueName: \"kubernetes.io/projected/980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef-kube-api-access-8p595\") pod \"machine-config-operator-74547568cd-6xx6l\" (UID: \"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.864179 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbrhz\" (UniqueName: \"kubernetes.io/projected/ec89a24c-0f8d-46a3-9a45-3334a7b13c4c-kube-api-access-vbrhz\") pod \"package-server-manager-789f6589d5-r5clm\" (UID: \"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.872637 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-wsdxg\" (UniqueName: \"kubernetes.io/projected/87a7f76a-260e-4b0d-896f-5c40f3681665-kube-api-access-wsdxg\") pod \"kube-storage-version-migrator-operator-b67b599dd-9z9vj\" (UID: \"87a7f76a-260e-4b0d-896f-5c40f3681665\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.886056 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.886329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2znds\" (UniqueName: \"kubernetes.io/projected/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-kube-api-access-2znds\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.907108 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.907174 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8slsk\" (UniqueName: \"kubernetes.io/projected/e73cb380-2c32-4f3a-a14e-bc062553eb81-kube-api-access-8slsk\") pod \"multus-admission-controller-857f4d67dd-zbzmb\" (UID: \"e73cb380-2c32-4f3a-a14e-bc062553eb81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.907325 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.40729328 +0000 UTC m=+142.699286175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.907642 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:52 crc kubenswrapper[4835]: E0216 15:09:52.908561 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.408546015 +0000 UTC m=+142.700539000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.931688 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s69h8\" (UniqueName: \"kubernetes.io/projected/18107f74-1f9f-4fbb-984a-b77b97c3d168-kube-api-access-s69h8\") pod \"catalog-operator-68c6474976-ljr7z\" (UID: \"18107f74-1f9f-4fbb-984a-b77b97c3d168\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.949059 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkmjr\" (UniqueName: \"kubernetes.io/projected/4473b226-8416-4a3b-95f8-ecaf3adcd9ef-kube-api-access-gkmjr\") pod \"service-ca-operator-777779d784-45wvv\" (UID: \"4473b226-8416-4a3b-95f8-ecaf3adcd9ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.961064 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.966711 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swn24"] Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.970372 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.971904 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gblg8\" (UniqueName: \"kubernetes.io/projected/59d881f5-1d23-49c7-8d84-71231e638736-kube-api-access-gblg8\") pod \"router-default-5444994796-7tzqp\" (UID: \"59d881f5-1d23-49c7-8d84-71231e638736\") " pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.976106 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.989483 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21b81b99-f692-4fed-b053-ff1545ff9532-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-cgrmn\" (UID: \"21b81b99-f692-4fed-b053-ff1545ff9532\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" Feb 16 15:09:52 crc kubenswrapper[4835]: I0216 15:09:52.990998 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.005611 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.010035 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.010794 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.510776201 +0000 UTC m=+142.802769096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.011424 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhw7\" (UniqueName: \"kubernetes.io/projected/c21c247b-8282-4ea0-aaac-cd2908a9cfac-kube-api-access-lqhw7\") pod \"marketplace-operator-79b997595-gqskc\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.014333 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.023275 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.031422 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4s88\" (UniqueName: \"kubernetes.io/projected/c80f4641-ca36-4c8f-b1ee-e3e0be705d7d-kube-api-access-z4s88\") pod \"machine-config-server-lhs7j\" (UID: \"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d\") " pod="openshift-machine-config-operator/machine-config-server-lhs7j" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.065204 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqxnw\" (UniqueName: \"kubernetes.io/projected/5b3b3e08-fd66-4c64-8f9d-53d1b8560708-kube-api-access-cqxnw\") pod \"olm-operator-6b444d44fb-vcfft\" (UID: \"5b3b3e08-fd66-4c64-8f9d-53d1b8560708\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.075001 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4ce46123-c5a3-48de-8dbd-aebdb8684cd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6qtn4\" (UID: \"4ce46123-c5a3-48de-8dbd-aebdb8684cd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.092822 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8d2z\" (UniqueName: \"kubernetes.io/projected/569d3eef-2b86-44fb-90a1-2bceae4d2e09-kube-api-access-n8d2z\") pod \"collect-profiles-29520900-7j6lh\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" Feb 16 15:09:53 crc 
kubenswrapper[4835]: I0216 15:09:53.104304 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.111467 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.112240 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.112554 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.612541994 +0000 UTC m=+142.904534889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.118419 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87kwh\" (UniqueName: \"kubernetes.io/projected/5e09e1ba-32be-4297-b408-6bdcd75c0478-kube-api-access-87kwh\") pod \"packageserver-d55dfcdfc-7qb6k\" (UID: \"5e09e1ba-32be-4297-b408-6bdcd75c0478\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.120165 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.137150 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.137302 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.140370 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvvwr\" (UniqueName: \"kubernetes.io/projected/754cd689-fe65-4acb-b2b4-854f89bf434e-kube-api-access-zvvwr\") pod \"machine-config-controller-84d6567774-wc9gv\" (UID: \"754cd689-fe65-4acb-b2b4-854f89bf434e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.148039 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.154067 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.164320 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.169571 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm"] Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.171212 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.174928 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rhzdd\" (UID: \"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.179913 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.183836 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc256\" (UniqueName: \"kubernetes.io/projected/085d8584-110e-47bc-901f-0ec23623f09d-kube-api-access-gc256\") pod \"ingress-canary-k5ktg\" (UID: \"085d8584-110e-47bc-901f-0ec23623f09d\") " pod="openshift-ingress-canary/ingress-canary-k5ktg" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.197369 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.214067 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.214494 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.714473152 +0000 UTC m=+143.006466047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.221210 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqtkc\" (UniqueName: \"kubernetes.io/projected/aeddce00-4ffb-40ba-832b-a2d30aee4528-kube-api-access-pqtkc\") pod \"csi-hostpathplugin-dcqgf\" (UID: \"aeddce00-4ffb-40ba-832b-a2d30aee4528\") " pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.229723 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8bcj\" (UniqueName: \"kubernetes.io/projected/37078ebd-3cc2-4e3e-9704-026d66636bfd-kube-api-access-n8bcj\") pod 
\"dns-default-2d8lr\" (UID: \"37078ebd-3cc2-4e3e-9704-026d66636bfd\") " pod="openshift-dns/dns-default-2d8lr" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.234225 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.244827 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k5ktg" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.249894 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr"] Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.251605 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lhs7j" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.277280 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ztcpg"] Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.323008 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.323484 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.823465344 +0000 UTC m=+143.115458239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.382406 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.392924 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.427268 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.427594 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:53.927580442 +0000 UTC m=+143.219573337 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.429140 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8"] Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.496239 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rflfs"] Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.503705 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7tzqp" event={"ID":"59d881f5-1d23-49c7-8d84-71231e638736","Type":"ContainerStarted","Data":"92cb2612091b27651ec2682f0358e64b4db15dc8c07cd09ac8258d946e733e13"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.504597 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2d8lr" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.508588 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" event={"ID":"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27","Type":"ContainerStarted","Data":"fa8c8a61e4f61a4292a1c63f57132752a6f48c5889cef9c968819fe8bc9c9b12"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.508642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" event={"ID":"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27","Type":"ContainerStarted","Data":"36bc4afbe20afde4928da465e732f4f937c5059c057d46157d7760dc2e973bc5"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.510755 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.514403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-28xh9" event={"ID":"3d329678-3edc-4b70-9796-85c6ada120de","Type":"ContainerStarted","Data":"9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.514472 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-28xh9" event={"ID":"3d329678-3edc-4b70-9796-85c6ada120de","Type":"ContainerStarted","Data":"1a48093372db7728dc7fe2be114ed60de2f868a15910037967158578bcc1c509"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.521619 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" event={"ID":"acf9bac3-c5bc-4294-83f0-1e52c261baa3","Type":"ContainerStarted","Data":"7d920fa202b20bdc8f42932b184ad3c26fc9618bf8b55292204c5ab976037cd3"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.525457 4835 
patch_prober.go:28] interesting pod/controller-manager-879f6c89f-swn24 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.525511 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" podUID="41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.528284 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.528332 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.528355 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 
15:09:53.528449 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.528474 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.529987 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.029967882 +0000 UTC m=+143.321960777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.530369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.530412 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/94c481c5-c791-453a-a03e-2eb7a130c132-image-import-ca\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.531054 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601d76ac-3e65-4ef1-9291-cd0e647ab37a-config\") pod \"machine-api-operator-5694c8668f-f6rdr\" (UID: \"601d76ac-3e65-4ef1-9291-cd0e647ab37a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.536143 4835 generic.go:334] "Generic (PLEG): container finished" podID="94a7fb98-0826-4559-9113-aad4415a7f21" containerID="1861f0b5dfc596af28b5f31bbd9133661091c034d2ce402fbe2bd11f7d2a8d75" exitCode=0 Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.536224 4835 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" event={"ID":"94a7fb98-0826-4559-9113-aad4415a7f21","Type":"ContainerDied","Data":"1861f0b5dfc596af28b5f31bbd9133661091c034d2ce402fbe2bd11f7d2a8d75"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.536254 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" event={"ID":"94a7fb98-0826-4559-9113-aad4415a7f21","Type":"ContainerStarted","Data":"614375360ab42e41a94c0c12260113938d5b1f9aedfed51e689836758734eb84"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.536832 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94c481c5-c791-453a-a03e-2eb7a130c132-encryption-config\") pod \"apiserver-76f77b778f-rcbfk\" (UID: \"94c481c5-c791-453a-a03e-2eb7a130c132\") " pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.567745 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" event={"ID":"23b53bb9-d2c4-4c05-8311-cdc55be68712","Type":"ContainerStarted","Data":"07f80a75c49a0c4f14e42eafffa8ffca5876832675790c5b8bf7a0ab28a6a9cd"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.581752 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" event={"ID":"064c5316-78a7-4320-a324-8ebe400e9db9","Type":"ContainerStarted","Data":"f3f487860ca94f2fb361982afa97a6acd1c15b614f36aa7147fb7d72e995ebc5"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.581827 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" event={"ID":"064c5316-78a7-4320-a324-8ebe400e9db9","Type":"ContainerStarted","Data":"557c9b7578e861d0f7aabf6a3183e3d8fb88069f94ec0d1cdeddebbcfa772a38"} Feb 
16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.581839 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" event={"ID":"064c5316-78a7-4320-a324-8ebe400e9db9","Type":"ContainerStarted","Data":"b6be746e5fcad80b1ed7f59b41dadb68503157bebd8ff4ae54607b84cec40471"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.591727 4835 generic.go:334] "Generic (PLEG): container finished" podID="725def92-8cb7-4ac0-8a1d-f72d03f8f7ca" containerID="89b6b94d727d3e232d5eaa65f3745deb75fbd10b53a55502bbb951ef030939aa" exitCode=0 Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.591849 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" event={"ID":"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca","Type":"ContainerDied","Data":"89b6b94d727d3e232d5eaa65f3745deb75fbd10b53a55502bbb951ef030939aa"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.591890 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" event={"ID":"725def92-8cb7-4ac0-8a1d-f72d03f8f7ca","Type":"ContainerStarted","Data":"68f41f1b97561e039daeb5579a83891de6805591b07661235c5c2ce49eec6f95"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.594906 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.616132 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" event={"ID":"dd0807fd-8df9-432e-9189-fdbf743995f3","Type":"ContainerStarted","Data":"4a47f773e121236a2179e1e8a2cb007e4482840fd08b3cad428772be39804b29"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.616181 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" event={"ID":"dd0807fd-8df9-432e-9189-fdbf743995f3","Type":"ContainerStarted","Data":"a997f71f71379fab5f897dc4ed6f337972eb548af3d060681e723bd0af12e9a2"} Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.619933 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddnzw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.623889 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddnzw" podUID="b6f870fb-6a4d-4d9c-9990-83a9af347710" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.630628 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.631332 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.644931 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.144872527 +0000 UTC m=+143.436865432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.650360 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.692081 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.734226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.734548 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.234522237 +0000 UTC m=+143.526515132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.844920 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.845804 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.345762241 +0000 UTC m=+143.637755136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.936111 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rqspp" Feb 16 15:09:53 crc kubenswrapper[4835]: I0216 15:09:53.951602 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:53 crc kubenswrapper[4835]: E0216 15:09:53.952010 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.451997947 +0000 UTC m=+143.743990842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.052660 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.053489 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.553470533 +0000 UTC m=+143.845463428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.157158 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.157768 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.657755445 +0000 UTC m=+143.949748340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.234547 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zdm8w" podStartSLOduration=123.234498249 podStartE2EDuration="2m3.234498249s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:54.195454044 +0000 UTC m=+143.487446939" watchObservedRunningTime="2026-02-16 15:09:54.234498249 +0000 UTC m=+143.526491144" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.260556 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.261022 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.760992039 +0000 UTC m=+144.052984924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.348323 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-94mgp" podStartSLOduration=123.348303734 podStartE2EDuration="2m3.348303734s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:54.312360554 +0000 UTC m=+143.604353459" watchObservedRunningTime="2026-02-16 15:09:54.348303734 +0000 UTC m=+143.640296629" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.368473 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.368866 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.86885353 +0000 UTC m=+144.160846425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.436766 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g9f7l" podStartSLOduration=123.43674289 podStartE2EDuration="2m3.43674289s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:54.431832005 +0000 UTC m=+143.723824910" watchObservedRunningTime="2026-02-16 15:09:54.43674289 +0000 UTC m=+143.728735805" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.474727 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.474876 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.9748534 +0000 UTC m=+144.266846295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.475815 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.476251 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:54.976226078 +0000 UTC m=+144.268219153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.599490 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.599730 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.099717869 +0000 UTC m=+144.391710764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.632838 4835 csr.go:261] certificate signing request csr-7tqgn is approved, waiting to be issued Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.634878 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" event={"ID":"6c28e183-5341-482b-9104-4ca0b17d4f3c","Type":"ContainerStarted","Data":"d87d109699bfee101e2ad22ac261f8b21cfa82e32bc516d2b6d7c4a16c6353cf"} Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.635744 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rqspp" podStartSLOduration=123.635729351 podStartE2EDuration="2m3.635729351s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:54.633039827 +0000 UTC m=+143.925032742" watchObservedRunningTime="2026-02-16 15:09:54.635729351 +0000 UTC m=+143.927722246" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.642930 4835 csr.go:257] certificate signing request csr-7tqgn is issued Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.653091 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" event={"ID":"a6c12c59-b21e-47cb-a55c-611be47e4039","Type":"ContainerStarted","Data":"511800d734e2297dbe91282547e8f1785760fbeb7fcd60f023e7bc117705ed1f"} Feb 16 
15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.663555 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ddnzw" podStartSLOduration=123.663515797 podStartE2EDuration="2m3.663515797s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:54.655669411 +0000 UTC m=+143.947662316" watchObservedRunningTime="2026-02-16 15:09:54.663515797 +0000 UTC m=+143.955508692" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.667924 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lhs7j" event={"ID":"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d","Type":"ContainerStarted","Data":"c65445c1fb934ff55fddb019f3cee4ce0a8659a05ef822bf8ca5a03542d5418c"} Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.668079 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lhs7j" event={"ID":"c80f4641-ca36-4c8f-b1ee-e3e0be705d7d","Type":"ContainerStarted","Data":"d62d88b276369350dc46ae72402c50685512d857f9f9ab5b10a5e55d5885d288"} Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.684926 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" event={"ID":"acf9bac3-c5bc-4294-83f0-1e52c261baa3","Type":"ContainerStarted","Data":"780888d0c071e59cc59b3c0821f6c43a526fcd94d1dafd8f11b434bbefd54090"} Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.698692 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7tzqp" event={"ID":"59d881f5-1d23-49c7-8d84-71231e638736","Type":"ContainerStarted","Data":"95c26e8843893c2185d8c71d30231ad49461f41b095b013f246aa1756662e651"} Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.700889 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.703769 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.203744355 +0000 UTC m=+144.495737390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.776461 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-28xh9" podStartSLOduration=123.776434207 podStartE2EDuration="2m3.776434207s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:54.726245275 +0000 UTC m=+144.018238180" watchObservedRunningTime="2026-02-16 15:09:54.776434207 +0000 UTC m=+144.068427092" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.810267 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.810784 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" event={"ID":"8445cc47-44b3-41c2-8dc0-41a05c56b6e2","Type":"ContainerStarted","Data":"d0cc4637c5d1c4a484b56fc6558c8f49911112e5fd48bac5f45e5d98a18de87e"} Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.811684 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.311657627 +0000 UTC m=+144.603650522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.838023 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lpr5m" podStartSLOduration=122.838002493 podStartE2EDuration="2m2.838002493s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:54.833106918 +0000 UTC m=+144.125099813" watchObservedRunningTime="2026-02-16 15:09:54.838002493 +0000 UTC 
m=+144.129995388" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.838764 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" event={"ID":"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c","Type":"ContainerStarted","Data":"e89016d5988e8965aae112b5c440f1c5d0763e949218b63645b0a0d458d6738b"} Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.838812 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" event={"ID":"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c","Type":"ContainerStarted","Data":"67b962de9e14f2ae7448c9cc611a967e48f5851edbd46aed847ae246c815e213"} Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.839976 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddnzw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.840018 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddnzw" podUID="b6f870fb-6a4d-4d9c-9990-83a9af347710" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.911941 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.913815 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: 
\"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:54 crc kubenswrapper[4835]: E0216 15:09:54.944115 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.444096925 +0000 UTC m=+144.736089820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.941950 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" podStartSLOduration=123.941934686 podStartE2EDuration="2m3.941934686s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:54.893973175 +0000 UTC m=+144.185966070" watchObservedRunningTime="2026-02-16 15:09:54.941934686 +0000 UTC m=+144.233927581" Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.959732 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l"] Feb 16 15:09:54 crc kubenswrapper[4835]: I0216 15:09:54.989065 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj"] Feb 16 15:09:55 crc kubenswrapper[4835]: 
W0216 15:09:55.011864 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a7f76a_260e_4b0d_896f_5c40f3681665.slice/crio-dbba88b64459d638faf5f4b32def74d064afea7feb0d8f8010ef2d2e1cf7eb15 WatchSource:0}: Error finding container dbba88b64459d638faf5f4b32def74d064afea7feb0d8f8010ef2d2e1cf7eb15: Status 404 returned error can't find the container with id dbba88b64459d638faf5f4b32def74d064afea7feb0d8f8010ef2d2e1cf7eb15 Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.014956 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.015261 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.515246615 +0000 UTC m=+144.807239510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.030332 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.045776 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z7cdv"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.053522 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-82spz"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.070839 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" podStartSLOduration=123.070813046 podStartE2EDuration="2m3.070813046s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.055848214 +0000 UTC m=+144.347841109" watchObservedRunningTime="2026-02-16 15:09:55.070813046 +0000 UTC m=+144.362805941" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.104933 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" podStartSLOduration=123.104918805 podStartE2EDuration="2m3.104918805s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.104066812 +0000 UTC m=+144.396059707" watchObservedRunningTime="2026-02-16 15:09:55.104918805 +0000 UTC m=+144.396911700" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.112581 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.115959 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.116133 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn"] Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.116269 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.616256368 +0000 UTC m=+144.908249253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.138426 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:09:55 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:09:55 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:09:55 crc kubenswrapper[4835]: healthz check failed Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.138479 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.182803 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zbzmb"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.204503 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.231345 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.231817 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.73180265 +0000 UTC m=+145.023795545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.234217 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.237724 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.246185 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.279056 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqflc" podStartSLOduration=123.279026731 podStartE2EDuration="2m3.279026731s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
15:09:55.277143799 +0000 UTC m=+144.569136694" watchObservedRunningTime="2026-02-16 15:09:55.279026731 +0000 UTC m=+144.571019626" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.284928 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.297146 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k5ktg"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.333152 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.333446 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.83343596 +0000 UTC m=+145.125428855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.338911 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-45wvv"] Feb 16 15:09:55 crc kubenswrapper[4835]: W0216 15:09:55.350431 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod085d8584_110e_47bc_901f_0ec23623f09d.slice/crio-14d31f0c8bc16a2b4f967cef6804ee3a141a7ba7708ccbd5fa6aa60abdf3120f WatchSource:0}: Error finding container 14d31f0c8bc16a2b4f967cef6804ee3a141a7ba7708ccbd5fa6aa60abdf3120f: Status 404 returned error can't find the container with id 14d31f0c8bc16a2b4f967cef6804ee3a141a7ba7708ccbd5fa6aa60abdf3120f Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.399855 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.410315 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpgbr" podStartSLOduration=123.410290006 podStartE2EDuration="2m3.410290006s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.388954749 +0000 UTC m=+144.680947644" watchObservedRunningTime="2026-02-16 15:09:55.410290006 +0000 UTC m=+144.702282901" Feb 
16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.437422 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.440179 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:55.940149938 +0000 UTC m=+145.232142833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.442453 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.443311 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:55.943283705 +0000 UTC m=+145.235276600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.470601 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" podStartSLOduration=123.470574186 podStartE2EDuration="2m3.470574186s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.454122573 +0000 UTC m=+144.746115478" watchObservedRunningTime="2026-02-16 15:09:55.470574186 +0000 UTC m=+144.762567081" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.500463 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dcqgf"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.519637 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.521815 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7tzqp" podStartSLOduration=123.521799137 podStartE2EDuration="2m3.521799137s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.516925103 +0000 
UTC m=+144.808917998" watchObservedRunningTime="2026-02-16 15:09:55.521799137 +0000 UTC m=+144.813792032" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.523638 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2d8lr"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.545280 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.546432 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.046414375 +0000 UTC m=+145.338407270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.568124 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rcbfk"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.583761 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lhs7j" podStartSLOduration=6.583738204 podStartE2EDuration="6.583738204s" podCreationTimestamp="2026-02-16 15:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.536883303 +0000 UTC m=+144.828876198" watchObservedRunningTime="2026-02-16 15:09:55.583738204 +0000 UTC m=+144.875731099" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.589289 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd"] Feb 16 15:09:55 crc kubenswrapper[4835]: W0216 15:09:55.603916 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94c481c5_c791_453a_a03e_2eb7a130c132.slice/crio-ba5465df07e9ef89cbb82a93473c3c81eb6d716b973f2291cb58e48391f4b6db WatchSource:0}: Error finding container ba5465df07e9ef89cbb82a93473c3c81eb6d716b973f2291cb58e48391f4b6db: Status 404 returned error can't find the container with id ba5465df07e9ef89cbb82a93473c3c81eb6d716b973f2291cb58e48391f4b6db Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 
15:09:55.639923 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqskc"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.642170 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" podStartSLOduration=123.642154183 podStartE2EDuration="2m3.642154183s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.638585784 +0000 UTC m=+144.930578699" watchObservedRunningTime="2026-02-16 15:09:55.642154183 +0000 UTC m=+144.934147078" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.643637 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-f6rdr"] Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.651108 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.651462 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.151446679 +0000 UTC m=+145.443439574 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.659702 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 15:04:54 +0000 UTC, rotation deadline is 2026-11-27 04:56:36.605618464 +0000 UTC Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.659759 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6805h46m40.945861766s for next certificate rotation Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.753139 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.753858 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.253842409 +0000 UTC m=+145.545835304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: W0216 15:09:55.762056 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc21c247b_8282_4ea0_aaac_cd2908a9cfac.slice/crio-1b224f7ea2a14f6dc1de18b0ce8166a477d153de76bf31a8ba12e08204be3c1a WatchSource:0}: Error finding container 1b224f7ea2a14f6dc1de18b0ce8166a477d153de76bf31a8ba12e08204be3c1a: Status 404 returned error can't find the container with id 1b224f7ea2a14f6dc1de18b0ce8166a477d153de76bf31a8ba12e08204be3c1a Feb 16 15:09:55 crc kubenswrapper[4835]: W0216 15:09:55.796763 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod601d76ac_3e65_4ef1_9291_cd0e647ab37a.slice/crio-5c2500d0e6539818a1a668530fec28e5a382aa119892d84d1fac3d60f907e127 WatchSource:0}: Error finding container 5c2500d0e6539818a1a668530fec28e5a382aa119892d84d1fac3d60f907e127: Status 404 returned error can't find the container with id 5c2500d0e6539818a1a668530fec28e5a382aa119892d84d1fac3d60f907e127 Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.854808 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:55 crc 
kubenswrapper[4835]: E0216 15:09:55.855114 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.355102908 +0000 UTC m=+145.647095803 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.871464 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" event={"ID":"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef","Type":"ContainerStarted","Data":"effbaa43932f07955bdb46ab9eda237b03c5932165b3825b90527589810be0db"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.871510 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" event={"ID":"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef","Type":"ContainerStarted","Data":"b56c41e46bda1c919c916834543a13a0f0f265045bc669551012088d112708f9"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.872971 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" event={"ID":"5b3b3e08-fd66-4c64-8f9d-53d1b8560708","Type":"ContainerStarted","Data":"78cea44fcc3d56ed88151a2b3e56a65334ab831ffc0623a80ae329b4079bf450"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.892796 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" event={"ID":"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691","Type":"ContainerStarted","Data":"be40c8b6fe0bafafe1c3957f62cf047ff6aec215e7053a571e90b7a79f31a3e8"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.897309 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" event={"ID":"f3e2d62a-b77c-4a06-bd55-6d835395a4be","Type":"ContainerStarted","Data":"56abe2d43e8e861d01b8683aba61f421384e763b71a5e6bf2e5f9cd268d43b4c"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.917838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" event={"ID":"87a7f76a-260e-4b0d-896f-5c40f3681665","Type":"ContainerStarted","Data":"00c70f33f7d50fdc5b81dd1cd2de8817aab744198a9e013a01f8f4db19a1562d"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.917880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" event={"ID":"87a7f76a-260e-4b0d-896f-5c40f3681665","Type":"ContainerStarted","Data":"dbba88b64459d638faf5f4b32def74d064afea7feb0d8f8010ef2d2e1cf7eb15"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.929872 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" event={"ID":"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278","Type":"ContainerStarted","Data":"03880fcf585cee94d1bb27a5ab71722e817800f465c83f8c8c4f967ea466af04"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.929931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" event={"ID":"7ff2e8c9-528f-4e5d-b2c0-fde7ce0be278","Type":"ContainerStarted","Data":"afdc909320d4761abb6d8b50cbadda64ec5173a07c0d2ca5c401a4ef42a79e09"} Feb 16 15:09:55 crc 
kubenswrapper[4835]: I0216 15:09:55.941095 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" event={"ID":"ec89a24c-0f8d-46a3-9a45-3334a7b13c4c","Type":"ContainerStarted","Data":"358d0a572e10f0183d5a653583bbcc64d0ac8c455289bbbb72842f1e5d5128e6"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.941159 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.941839 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k5ktg" event={"ID":"085d8584-110e-47bc-901f-0ec23623f09d","Type":"ContainerStarted","Data":"14d31f0c8bc16a2b4f967cef6804ee3a141a7ba7708ccbd5fa6aa60abdf3120f"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.943908 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" event={"ID":"94c481c5-c791-453a-a03e-2eb7a130c132","Type":"ContainerStarted","Data":"ba5465df07e9ef89cbb82a93473c3c81eb6d716b973f2291cb58e48391f4b6db"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.945268 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9z9vj" podStartSLOduration=123.945249822 podStartE2EDuration="2m3.945249822s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.943396 +0000 UTC m=+145.235388885" watchObservedRunningTime="2026-02-16 15:09:55.945249822 +0000 UTC m=+145.237242717" Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.956902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" event={"ID":"a6c12c59-b21e-47cb-a55c-611be47e4039","Type":"ContainerStarted","Data":"7597133c6e12859559656b15804fe8490fb1a9385dd14677b2d52b4a4a8bf8ea"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.956950 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" event={"ID":"a6c12c59-b21e-47cb-a55c-611be47e4039","Type":"ContainerStarted","Data":"711b4abf283024d1df3173150d76ce370df064400f2be8d1a47f242f9c81c089"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.957273 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:55 crc kubenswrapper[4835]: E0216 15:09:55.958679 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.458460295 +0000 UTC m=+145.750453190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.963607 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" event={"ID":"569d3eef-2b86-44fb-90a1-2bceae4d2e09","Type":"ContainerStarted","Data":"09d0aabba92f6623b3149a4a24558dd2a14e4f8003950252159b0378ecfb6597"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.991590 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" event={"ID":"8445cc47-44b3-41c2-8dc0-41a05c56b6e2","Type":"ContainerStarted","Data":"244e30ee0bba5a9b416931e208ebe3a6cffc54ba3776dce9349aa1b0370cc5d7"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.991811 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" event={"ID":"8445cc47-44b3-41c2-8dc0-41a05c56b6e2","Type":"ContainerStarted","Data":"cb4a49eb14a844071659824d941044002122ec3aa0b22938c7ae454177698634"} Feb 16 15:09:55 crc kubenswrapper[4835]: I0216 15:09:55.994782 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" event={"ID":"4ce46123-c5a3-48de-8dbd-aebdb8684cd5","Type":"ContainerStarted","Data":"c354d1943e5ab12b1dc43c938709dccbab18a79d5ed54bcd8a8739e90262d6f1"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.003797 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" event={"ID":"e73cb380-2c32-4f3a-a14e-bc062553eb81","Type":"ContainerStarted","Data":"7120576941ac1768dc1ad66a9e15d6e01b50f99c2335717fa59d467978e7ee8c"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.010169 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" event={"ID":"52c8913d-e3d1-4f98-b14d-06369bb56b95","Type":"ContainerStarted","Data":"c8bec94c63c96d72d41a0723f04f031ceee14d3d4d3c74e6f75a3ffc77a3446a"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.011204 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" event={"ID":"5c596d5a-6315-4790-950f-097c373225d2","Type":"ContainerStarted","Data":"97efbc2f3775cd6f39863b2660fe231dc3307c0734656f9fcd6646d7c670f415"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.012346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" event={"ID":"6c28e183-5341-482b-9104-4ca0b17d4f3c","Type":"ContainerStarted","Data":"3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.013094 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.016021 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" event={"ID":"21b81b99-f692-4fed-b053-ff1545ff9532","Type":"ContainerStarted","Data":"c7895a3ad9fd5318fa30f29708d2a501da0ffb84085ff1814b23fec0034d43fc"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.020091 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" event={"ID":"4473b226-8416-4a3b-95f8-ecaf3adcd9ef","Type":"ContainerStarted","Data":"9573a0400156c55a51b34bd27b760b30244868831873578f9ddf70d89438af4f"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.031628 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-z7cdv" podStartSLOduration=124.0316094 podStartE2EDuration="2m4.0316094s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:55.972629366 +0000 UTC m=+145.264622261" watchObservedRunningTime="2026-02-16 15:09:56.0316094 +0000 UTC m=+145.323602295" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.031909 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rflfs" podStartSLOduration=124.031905328 podStartE2EDuration="2m4.031905328s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:56.027308652 +0000 UTC m=+145.319301547" watchObservedRunningTime="2026-02-16 15:09:56.031905328 +0000 UTC m=+145.323898213" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.032091 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" event={"ID":"601d76ac-3e65-4ef1-9291-cd0e647ab37a","Type":"ContainerStarted","Data":"5c2500d0e6539818a1a668530fec28e5a382aa119892d84d1fac3d60f907e127"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.057340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" 
event={"ID":"18107f74-1f9f-4fbb-984a-b77b97c3d168","Type":"ContainerStarted","Data":"7be6d1b6e408dee8a000fd09b9cfd379827a93f646df4a021a5be13ee8430ce8"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.059660 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.061159 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.561147114 +0000 UTC m=+145.853140009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.078835 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" event={"ID":"5e09e1ba-32be-4297-b408-6bdcd75c0478","Type":"ContainerStarted","Data":"9a216839a12718c6add2014ec8a128a894043611b49588e9c9993c2faa54dbce"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.095550 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25tf8" 
podStartSLOduration=125.095519781 podStartE2EDuration="2m5.095519781s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:56.068518067 +0000 UTC m=+145.360510952" watchObservedRunningTime="2026-02-16 15:09:56.095519781 +0000 UTC m=+145.387512676" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.099690 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" podStartSLOduration=125.099675985 podStartE2EDuration="2m5.099675985s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:56.094813001 +0000 UTC m=+145.386805896" watchObservedRunningTime="2026-02-16 15:09:56.099675985 +0000 UTC m=+145.391668890" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.105512 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" event={"ID":"c21c247b-8282-4ea0-aaac-cd2908a9cfac","Type":"ContainerStarted","Data":"1b224f7ea2a14f6dc1de18b0ce8166a477d153de76bf31a8ba12e08204be3c1a"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.108371 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2d8lr" event={"ID":"37078ebd-3cc2-4e3e-9704-026d66636bfd","Type":"ContainerStarted","Data":"820883115a41391d52c19fa358f95fe84c43fe6e82dcb3d08d055d69a7510b3b"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.109397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" event={"ID":"aeddce00-4ffb-40ba-832b-a2d30aee4528","Type":"ContainerStarted","Data":"ad9df2a2961b9613e3832147c495b643b75d6465d7db4cced0d405ac48973d51"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 
15:09:56.111888 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" event={"ID":"94a7fb98-0826-4559-9113-aad4415a7f21","Type":"ContainerStarted","Data":"fc666c228c3a6d2c93c5bdeadf1b5811c388b1def12c506c6329599a76ecf47c"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.114614 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" event={"ID":"754cd689-fe65-4acb-b2b4-854f89bf434e","Type":"ContainerStarted","Data":"6fe3ddf109b449b823b9bdddf57d6ea494403ddf2228d501a54dd93d9e6b6a19"} Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.127234 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4nrjr" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.131022 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:09:56 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:09:56 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:09:56 crc kubenswrapper[4835]: healthz check failed Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.131061 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.163173 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.171636 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.671602477 +0000 UTC m=+145.963595372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.276247 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.276566 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.776554048 +0000 UTC m=+146.068546933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.377559 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.378186 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.878169067 +0000 UTC m=+146.170161962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.479606 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.480037 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:56.980020462 +0000 UTC m=+146.272013357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.584119 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.584593 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.084522861 +0000 UTC m=+146.376515756 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.691612 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.693006 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.192988259 +0000 UTC m=+146.484981154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.795304 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.795813 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.29579038 +0000 UTC m=+146.587783275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.817364 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.900277 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:56 crc kubenswrapper[4835]: E0216 15:09:56.901231 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.401219375 +0000 UTC m=+146.693212260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.976357 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.976426 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:56 crc kubenswrapper[4835]: I0216 15:09:56.992574 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.003716 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.503701888 +0000 UTC m=+146.795694783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.003627 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.004004 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.004316 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.504308594 +0000 UTC m=+146.796301479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.107659 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.107898 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.607881797 +0000 UTC m=+146.899874692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.108210 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.108572 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.608563986 +0000 UTC m=+146.900556881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.120443 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:09:57 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:09:57 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:09:57 crc kubenswrapper[4835]: healthz check failed Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.120497 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.148959 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" event={"ID":"4ce46123-c5a3-48de-8dbd-aebdb8684cd5","Type":"ContainerStarted","Data":"862f88be8f2fec773f4417055ae7821ae7f2b0db9d16118cbcb9d8194f7d707a"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.149024 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" event={"ID":"4ce46123-c5a3-48de-8dbd-aebdb8684cd5","Type":"ContainerStarted","Data":"1c2560d8b242ec273fc19aa7439ef8cc2eb1b91c17d5060eeae5a45cc644a9cf"} Feb 16 15:09:57 crc 
kubenswrapper[4835]: I0216 15:09:57.167366 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" event={"ID":"754cd689-fe65-4acb-b2b4-854f89bf434e","Type":"ContainerStarted","Data":"821fe2279d7d1866239ba46ba7b9542ac9e3a3e63f4bc37f5c5b443c2e0df356"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.167418 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" event={"ID":"754cd689-fe65-4acb-b2b4-854f89bf434e","Type":"ContainerStarted","Data":"0c45c97db109b85dd9db5646069eaca8fb40fbd8fe7148edab7e7ccbc98b1727"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.187989 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" event={"ID":"c21c247b-8282-4ea0-aaac-cd2908a9cfac","Type":"ContainerStarted","Data":"f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.190438 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.190544 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gqskc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.190585 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" podUID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 16 15:09:57 crc 
kubenswrapper[4835]: I0216 15:09:57.204516 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" event={"ID":"f3e2d62a-b77c-4a06-bd55-6d835395a4be","Type":"ContainerStarted","Data":"1f78ef637cf55d3a36654b691c6562defbff1559567d125685aa792a22c2ca65"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.205099 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wc9gv" podStartSLOduration=125.205078805 podStartE2EDuration="2m5.205078805s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.201946758 +0000 UTC m=+146.493939653" watchObservedRunningTime="2026-02-16 15:09:57.205078805 +0000 UTC m=+146.497071700" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.209088 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.210648 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.710626767 +0000 UTC m=+147.002619662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.227781 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" event={"ID":"601d76ac-3e65-4ef1-9291-cd0e647ab37a","Type":"ContainerStarted","Data":"74f1ff6e503ed62b9ca91d252bdde1d763bbc78b7490a8fab79e3928450169a8"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.232871 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" podStartSLOduration=125.23284933 podStartE2EDuration="2m5.23284933s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.232437678 +0000 UTC m=+146.524430593" watchObservedRunningTime="2026-02-16 15:09:57.23284933 +0000 UTC m=+146.524842225" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.239104 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" event={"ID":"21b81b99-f692-4fed-b053-ff1545ff9532","Type":"ContainerStarted","Data":"06eda2c27a3e627f18c196469c657db2d0b6436b44319519d4ffd014a25cd25c"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.253016 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" 
event={"ID":"569d3eef-2b86-44fb-90a1-2bceae4d2e09","Type":"ContainerStarted","Data":"a355cb939e11ca18ba891026d16bd4729e06b8f994c6d0417fba4ba1f4d02eb5"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.263491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" event={"ID":"52c8913d-e3d1-4f98-b14d-06369bb56b95","Type":"ContainerStarted","Data":"1f4c0f5a810b90b6a3451a584146da0c54a407f32603a504facf649f4395926f"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.269917 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" podStartSLOduration=125.26989802 podStartE2EDuration="2m5.26989802s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.268702637 +0000 UTC m=+146.560695532" watchObservedRunningTime="2026-02-16 15:09:57.26989802 +0000 UTC m=+146.561890915" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.290106 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" event={"ID":"980c7c3e-1fa8-4ca4-be83-9a15a0d7aaef","Type":"ContainerStarted","Data":"235ef6504df2540ac62060fc8a9a511ab188a4054f89b4d18d7376a441873971"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.303785 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-82spz" podStartSLOduration=125.303771853 podStartE2EDuration="2m5.303771853s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.301901192 +0000 UTC m=+146.593894087" 
watchObservedRunningTime="2026-02-16 15:09:57.303771853 +0000 UTC m=+146.595764748" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.312699 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.313470 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.81345469 +0000 UTC m=+147.105447585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.314146 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" event={"ID":"5b3b3e08-fd66-4c64-8f9d-53d1b8560708","Type":"ContainerStarted","Data":"62cf7c1addc1f600b895fe5ca2955e1ffe9b4886afa65e2f2f0dd892d835130d"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.314187 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.323099 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" event={"ID":"e73cb380-2c32-4f3a-a14e-bc062553eb81","Type":"ContainerStarted","Data":"b434d3ac5287a02f417b8f33bad79278d9ac8c3a6742324e61f06de38676d32d"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.324165 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2d8lr" event={"ID":"37078ebd-3cc2-4e3e-9704-026d66636bfd","Type":"ContainerStarted","Data":"f256a509eeb4e03ca9e68abdc0349907401af381acbe987ff72897bebb2c8295"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.332400 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-cxxgb" podStartSLOduration=125.332371481 podStartE2EDuration="2m5.332371481s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.329577254 +0000 UTC m=+146.621570159" watchObservedRunningTime="2026-02-16 15:09:57.332371481 +0000 UTC m=+146.624364376" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.343534 4835 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vcfft container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.343593 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" podUID="5b3b3e08-fd66-4c64-8f9d-53d1b8560708" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.343826 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" event={"ID":"5c596d5a-6315-4790-950f-097c373225d2","Type":"ContainerStarted","Data":"a36de463aa8d10430a49ed8f810971153a7289eb03d6f43249ff7be928a7bbcd"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.343888 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" event={"ID":"5c596d5a-6315-4790-950f-097c373225d2","Type":"ContainerStarted","Data":"dfc24d22c7ee5a9029f17e73ee6fe99a2bc7bfaa1bcc7ec055811fe04cc3e852"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.355298 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" event={"ID":"4473b226-8416-4a3b-95f8-ecaf3adcd9ef","Type":"ContainerStarted","Data":"869be23a0fef6efed66a9f063377c3f8212377f9786b8174e29c943a88f84346"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.356842 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" podStartSLOduration=126.356807994 podStartE2EDuration="2m6.356807994s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.356125135 +0000 UTC m=+146.648118030" watchObservedRunningTime="2026-02-16 15:09:57.356807994 +0000 UTC m=+146.648800889" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.367895 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k5ktg" event={"ID":"085d8584-110e-47bc-901f-0ec23623f09d","Type":"ContainerStarted","Data":"9282716bd5cd69e0716e343c080d96b884cbcb480af2a1975e54d38cdf895c7c"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.417506 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.417631 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.917603659 +0000 UTC m=+147.209596554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.429029 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.431358 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" event={"ID":"5e09e1ba-32be-4297-b408-6bdcd75c0478","Type":"ContainerStarted","Data":"7092f1e392b383ebf924ea4a850141dd930567e7eaec761e2b156da53ae2b3e3"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 
15:09:57.432307 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.433223 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:57.933202918 +0000 UTC m=+147.225195813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.441688 4835 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7qb6k container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.441780 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" podUID="5e09e1ba-32be-4297-b408-6bdcd75c0478" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.458705 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6xx6l" podStartSLOduration=125.4586902 
podStartE2EDuration="2m5.4586902s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.457054485 +0000 UTC m=+146.749047380" watchObservedRunningTime="2026-02-16 15:09:57.4586902 +0000 UTC m=+146.750683095" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.476947 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" event={"ID":"18107f74-1f9f-4fbb-984a-b77b97c3d168","Type":"ContainerStarted","Data":"3c02c87671c21927229b079899eb3459ac2adb8ccb44bd588c8bb203105b7fa9"} Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.489300 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9hzvc" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.500643 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-cgrmn" podStartSLOduration=125.500620006 podStartE2EDuration="2m5.500620006s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.499080523 +0000 UTC m=+146.791073418" watchObservedRunningTime="2026-02-16 15:09:57.500620006 +0000 UTC m=+146.792612901" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.536282 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.536500 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.036481823 +0000 UTC m=+147.328474718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.536693 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.537257 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hc4hn" podStartSLOduration=125.537230754 podStartE2EDuration="2m5.537230754s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.534359235 +0000 UTC m=+146.826352130" watchObservedRunningTime="2026-02-16 15:09:57.537230754 +0000 UTC m=+146.829223649" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.538806 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.038796427 +0000 UTC m=+147.330789322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.598169 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-k5ktg" podStartSLOduration=8.598153342 podStartE2EDuration="8.598153342s" podCreationTimestamp="2026-02-16 15:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.563551379 +0000 UTC m=+146.855544284" watchObservedRunningTime="2026-02-16 15:09:57.598153342 +0000 UTC m=+146.890146237" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.628854 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" podStartSLOduration=125.628835177 podStartE2EDuration="2m5.628835177s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.598722238 +0000 UTC m=+146.890715133" watchObservedRunningTime="2026-02-16 15:09:57.628835177 +0000 UTC m=+146.920828072" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.637980 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.638828 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.138806562 +0000 UTC m=+147.430799457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.675824 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" podStartSLOduration=125.675801331 podStartE2EDuration="2m5.675801331s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.673870148 +0000 UTC m=+146.965863043" watchObservedRunningTime="2026-02-16 15:09:57.675801331 +0000 UTC m=+146.967794226" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.739437 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.739783 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.239771043 +0000 UTC m=+147.531763948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.774870 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" podStartSLOduration=125.77485411 podStartE2EDuration="2m5.77485411s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.773118042 +0000 UTC m=+147.065110937" watchObservedRunningTime="2026-02-16 15:09:57.77485411 +0000 UTC m=+147.066847005" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.817869 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-45wvv" podStartSLOduration=125.817849494 podStartE2EDuration="2m5.817849494s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:57.815263163 +0000 UTC m=+147.107256058" watchObservedRunningTime="2026-02-16 15:09:57.817849494 +0000 UTC m=+147.109842389" Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.840401 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.840653 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.340613091 +0000 UTC m=+147.632605996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.841091 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.841590 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.341568877 +0000 UTC m=+147.633561772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.942022 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.942277 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.44224048 +0000 UTC m=+147.734233375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:57 crc kubenswrapper[4835]: I0216 15:09:57.942649 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:57 crc kubenswrapper[4835]: E0216 15:09:57.943095 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.443076754 +0000 UTC m=+147.735069649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.043746 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.043889 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.54386311 +0000 UTC m=+147.835856005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.044105 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.044565 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.544551449 +0000 UTC m=+147.836544344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.116985 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:09:58 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:09:58 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:09:58 crc kubenswrapper[4835]: healthz check failed Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.117061 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.144750 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.145014 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:58.644963735 +0000 UTC m=+147.936956630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.145155 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.145460 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.645443268 +0000 UTC m=+147.937436163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.247081 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.247268 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.747242712 +0000 UTC m=+148.039235607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.247650 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.247927 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.74791542 +0000 UTC m=+148.039908315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.348562 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.348730 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.848713167 +0000 UTC m=+148.140706062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.348808 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.349223 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.849213241 +0000 UTC m=+148.141206136 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.450223 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.450389 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.950365147 +0000 UTC m=+148.242358042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.450474 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.450823 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:58.950809809 +0000 UTC m=+148.242802704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.484165 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" event={"ID":"e73cb380-2c32-4f3a-a14e-bc062553eb81","Type":"ContainerStarted","Data":"c57a32e6ceaa12e13e6c8f0742ee9f2ec1a8109d8070f2398ce776897904ae45"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.486497 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" event={"ID":"acdddb4f-12bd-4fc3-a1d6-3e5aa01fd691","Type":"ContainerStarted","Data":"05fdbdff3642b269e0ba6446616ca173794d8ec06b23d709c7f73add03d2c68c"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.488218 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-f6rdr" event={"ID":"601d76ac-3e65-4ef1-9291-cd0e647ab37a","Type":"ContainerStarted","Data":"959d34de4e37cd59677c426d36bf9bc711054e79bdffcbaa1912ce10561b75bd"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.490091 4835 generic.go:334] "Generic (PLEG): container finished" podID="569d3eef-2b86-44fb-90a1-2bceae4d2e09" containerID="a355cb939e11ca18ba891026d16bd4729e06b8f994c6d0417fba4ba1f4d02eb5" exitCode=0 Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.490141 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" 
event={"ID":"569d3eef-2b86-44fb-90a1-2bceae4d2e09","Type":"ContainerDied","Data":"a355cb939e11ca18ba891026d16bd4729e06b8f994c6d0417fba4ba1f4d02eb5"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.492404 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2d8lr" event={"ID":"37078ebd-3cc2-4e3e-9704-026d66636bfd","Type":"ContainerStarted","Data":"a1b672b7a28c5aaeae5aef2c9c60a005749dc645b723f56446742d5b13d499b9"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.492586 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2d8lr" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.494294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" event={"ID":"aeddce00-4ffb-40ba-832b-a2d30aee4528","Type":"ContainerStarted","Data":"a92c9ccf592d4dba75d4ab407a55631613d3cc42df73f16518b43c68fc96f9e6"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.498178 4835 generic.go:334] "Generic (PLEG): container finished" podID="94c481c5-c791-453a-a03e-2eb7a130c132" containerID="703507c5b73201926240f87f3806a89ac665b292da80504b9cd982a5f8368faf" exitCode=0 Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.498249 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" event={"ID":"94c481c5-c791-453a-a03e-2eb7a130c132","Type":"ContainerDied","Data":"703507c5b73201926240f87f3806a89ac665b292da80504b9cd982a5f8368faf"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.498390 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" event={"ID":"94c481c5-c791-453a-a03e-2eb7a130c132","Type":"ContainerStarted","Data":"fc40d562ec67d3fc9651e3394a4ea7e47206cbf30ab17227845438e5b404b5df"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.498414 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" event={"ID":"94c481c5-c791-453a-a03e-2eb7a130c132","Type":"ContainerStarted","Data":"5ab3640148c099c0ec33ce631fff1b3b79aee3ac200ae1e2c1a1b462e1f1ec32"} Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.499386 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.499763 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gqskc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.499879 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" podUID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.501605 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zbzmb" podStartSLOduration=126.501592808 podStartE2EDuration="2m6.501592808s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:58.500885899 +0000 UTC m=+147.792878804" watchObservedRunningTime="2026-02-16 15:09:58.501592808 +0000 UTC m=+147.793585703" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.515133 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vcfft" Feb 16 15:09:58 crc 
kubenswrapper[4835]: I0216 15:09:58.518258 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-ljr7z" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.551411 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.551644 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.051571185 +0000 UTC m=+148.343564080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.552672 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.568854 4835 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.06882776 +0000 UTC m=+148.360820655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.589504 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6qtn4" podStartSLOduration=126.589475729 podStartE2EDuration="2m6.589475729s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:58.574953279 +0000 UTC m=+147.866946184" watchObservedRunningTime="2026-02-16 15:09:58.589475729 +0000 UTC m=+147.881468624" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.624673 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2d8lr" podStartSLOduration=9.624647518 podStartE2EDuration="9.624647518s" podCreationTimestamp="2026-02-16 15:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:58.623396183 +0000 UTC m=+147.915389078" watchObservedRunningTime="2026-02-16 15:09:58.624647518 +0000 UTC m=+147.916640413" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.655870 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.656140 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.656784 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.657086 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.157070221 +0000 UTC m=+148.449063116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.675892 4835 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rcbfk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.675959 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" podUID="94c481c5-c791-453a-a03e-2eb7a130c132" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.691542 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rhzdd" podStartSLOduration=126.69150956 podStartE2EDuration="2m6.69150956s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:58.665766561 +0000 UTC m=+147.957759466" watchObservedRunningTime="2026-02-16 15:09:58.69150956 +0000 UTC m=+147.983502455" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.758698 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.759280 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.259261006 +0000 UTC m=+148.551253901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.860511 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.860693 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.360658149 +0000 UTC m=+148.652651054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.860890 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.861234 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.361223925 +0000 UTC m=+148.653217020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.902915 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7qb6k" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.920126 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk" podStartSLOduration=127.920108587 podStartE2EDuration="2m7.920108587s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:58.767197135 +0000 UTC m=+148.059190040" watchObservedRunningTime="2026-02-16 15:09:58.920108587 +0000 UTC m=+148.212101482" Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.962306 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.962547 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:59.462496474 +0000 UTC m=+148.754489369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:58 crc kubenswrapper[4835]: I0216 15:09:58.962810 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:58 crc kubenswrapper[4835]: E0216 15:09:58.963307 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.463290816 +0000 UTC m=+148.755283711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.063959 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.064168 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.564139893 +0000 UTC m=+148.856132788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.064338 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.064661 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.564649577 +0000 UTC m=+148.856642462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.119983 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:09:59 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:09:59 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:09:59 crc kubenswrapper[4835]: healthz check failed Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.120105 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.165184 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.165369 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:09:59.665343481 +0000 UTC m=+148.957336376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.165631 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.165907 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.665900666 +0000 UTC m=+148.957893561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.267094 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.267680 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.767660369 +0000 UTC m=+149.059653264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.267924 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.268496 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.768470542 +0000 UTC m=+149.060463437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.369682 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.369905 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.869876065 +0000 UTC m=+149.161868960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.369960 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.370008 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.370082 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.370130 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.370165 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.370599 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.870577664 +0000 UTC m=+149.162570559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.371952 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.380685 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.380821 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.382794 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.471092 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.471405 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.97134851 +0000 UTC m=+149.263341405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.471860 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.472272 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:09:59.972252445 +0000 UTC m=+149.264245340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.501821 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.505017 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" event={"ID":"aeddce00-4ffb-40ba-832b-a2d30aee4528","Type":"ContainerStarted","Data":"c1f60903c976980dfd68677efcfb116b3dabc6ed692f076f31a4ef7f895a96df"} Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.505077 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" event={"ID":"aeddce00-4ffb-40ba-832b-a2d30aee4528","Type":"ContainerStarted","Data":"c7577495c171ccfc08dc42b986d73ecfb4c536d206b27cb18cf45cc021545d74"} Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.505976 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gqskc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.506019 4835 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" podUID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.573383 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.575257 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.075222071 +0000 UTC m=+149.367214966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.613226 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.622494 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.688548 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.688950 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.188935274 +0000 UTC m=+149.480928169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.735222 4835 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.789902 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.790505 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.290488731 +0000 UTC m=+149.582481626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.854635 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wm95t"] Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.855520 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.858194 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.874679 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wm95t"] Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.893959 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.894244 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.394232239 +0000 UTC m=+149.686225134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.916693 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.995101 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/569d3eef-2b86-44fb-90a1-2bceae4d2e09-config-volume\") pod \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.995211 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/569d3eef-2b86-44fb-90a1-2bceae4d2e09-secret-volume\") pod \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.995304 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8d2z\" (UniqueName: \"kubernetes.io/projected/569d3eef-2b86-44fb-90a1-2bceae4d2e09-kube-api-access-n8d2z\") pod \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\" (UID: \"569d3eef-2b86-44fb-90a1-2bceae4d2e09\") " Feb 16 15:09:59 crc kubenswrapper[4835]: I0216 15:09:59.997681 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569d3eef-2b86-44fb-90a1-2bceae4d2e09-config-volume" (OuterVolumeSpecName: "config-volume") pod "569d3eef-2b86-44fb-90a1-2bceae4d2e09" (UID: "569d3eef-2b86-44fb-90a1-2bceae4d2e09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:09:59 crc kubenswrapper[4835]: E0216 15:09:59.997991 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:10:00.497974186 +0000 UTC m=+149.789967081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.003559 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.003866 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-utilities\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.003902 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-catalog-content\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: E0216 15:10:00.004367 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.504350462 +0000 UTC m=+149.796343357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.004585 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.004617 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxn2f\" (UniqueName: \"kubernetes.io/projected/e1fe6dd1-829b-4120-8585-040e9032f292-kube-api-access-cxn2f\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.004716 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/569d3eef-2b86-44fb-90a1-2bceae4d2e09-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.012048 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569d3eef-2b86-44fb-90a1-2bceae4d2e09-secret-volume" (OuterVolumeSpecName: "secret-volume") 
pod "569d3eef-2b86-44fb-90a1-2bceae4d2e09" (UID: "569d3eef-2b86-44fb-90a1-2bceae4d2e09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.029965 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569d3eef-2b86-44fb-90a1-2bceae4d2e09-kube-api-access-n8d2z" (OuterVolumeSpecName: "kube-api-access-n8d2z") pod "569d3eef-2b86-44fb-90a1-2bceae4d2e09" (UID: "569d3eef-2b86-44fb-90a1-2bceae4d2e09"). InnerVolumeSpecName "kube-api-access-n8d2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.064718 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-98n7v"] Feb 16 15:10:00 crc kubenswrapper[4835]: E0216 15:10:00.076825 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569d3eef-2b86-44fb-90a1-2bceae4d2e09" containerName="collect-profiles" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.077026 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="569d3eef-2b86-44fb-90a1-2bceae4d2e09" containerName="collect-profiles" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.080482 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="569d3eef-2b86-44fb-90a1-2bceae4d2e09" containerName="collect-profiles" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.082614 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.092609 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98n7v"] Feb 16 15:10:00 crc kubenswrapper[4835]: W0216 15:10:00.094206 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a95a2ad591f70593f3b82837bebe5682ed1c9518974ac06b99f9fe2dd3fe83e7 WatchSource:0}: Error finding container a95a2ad591f70593f3b82837bebe5682ed1c9518974ac06b99f9fe2dd3fe83e7: Status 404 returned error can't find the container with id a95a2ad591f70593f3b82837bebe5682ed1c9518974ac06b99f9fe2dd3fe83e7 Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.094727 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.105503 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.105975 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-utilities\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.106024 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-catalog-content\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.106072 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxn2f\" (UniqueName: \"kubernetes.io/projected/e1fe6dd1-829b-4120-8585-040e9032f292-kube-api-access-cxn2f\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.106142 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/569d3eef-2b86-44fb-90a1-2bceae4d2e09-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.106160 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8d2z\" (UniqueName: \"kubernetes.io/projected/569d3eef-2b86-44fb-90a1-2bceae4d2e09-kube-api-access-n8d2z\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:00 crc kubenswrapper[4835]: E0216 15:10:00.106519 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.606503536 +0000 UTC m=+149.898496431 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.106888 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-utilities\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.107116 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-catalog-content\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.126634 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxn2f\" (UniqueName: \"kubernetes.io/projected/e1fe6dd1-829b-4120-8585-040e9032f292-kube-api-access-cxn2f\") pod \"certified-operators-wm95t\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.127727 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:00 crc kubenswrapper[4835]: [-]has-synced failed: reason 
withheld Feb 16 15:10:00 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:00 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.127793 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.189827 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.210184 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-utilities\") pod \"community-operators-98n7v\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.210250 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-catalog-content\") pod \"community-operators-98n7v\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.210344 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtxr4\" (UniqueName: \"kubernetes.io/projected/61564e44-b4e6-4a57-9232-3403b0173aa6-kube-api-access-mtxr4\") pod \"community-operators-98n7v\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.210373 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:00 crc kubenswrapper[4835]: E0216 15:10:00.210712 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.710699556 +0000 UTC m=+150.002692451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.256535 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kxxft"] Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.257701 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.270153 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxxft"] Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.314011 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:10:00 crc kubenswrapper[4835]: E0216 15:10:00.314187 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.814159126 +0000 UTC m=+150.106152021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.314399 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-utilities\") pod \"community-operators-98n7v\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.314491 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-catalog-content\") pod \"community-operators-98n7v\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.314548 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtxr4\" (UniqueName: \"kubernetes.io/projected/61564e44-b4e6-4a57-9232-3403b0173aa6-kube-api-access-mtxr4\") pod \"community-operators-98n7v\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.314588 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: 
\"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:00 crc kubenswrapper[4835]: E0216 15:10:00.315038 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.81502146 +0000 UTC m=+150.107014355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.315557 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-utilities\") pod \"community-operators-98n7v\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.315880 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-catalog-content\") pod \"community-operators-98n7v\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.338423 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtxr4\" (UniqueName: \"kubernetes.io/projected/61564e44-b4e6-4a57-9232-3403b0173aa6-kube-api-access-mtxr4\") pod \"community-operators-98n7v\" (UID: 
\"61564e44-b4e6-4a57-9232-3403b0173aa6\") " pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.416799 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.417062 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m9mp\" (UniqueName: \"kubernetes.io/projected/c7361241-f3c4-483a-9aa8-d1af72ab348b-kube-api-access-2m9mp\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.417152 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-catalog-content\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.417201 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-utilities\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: E0216 15:10:00.417318 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 15:10:00.917296357 +0000 UTC m=+150.209289252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.429711 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.458697 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qlnj4"] Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.460008 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.468036 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlnj4"] Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.495349 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wm95t"] Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.516924 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5a90b23500bd5a48840ac9b76b86f4461c48d455ee910a35ecaba2cb339c2e56"} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.517004 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"05e776fe3a954a7c19e810cf153627d122889f7762ffbdb26fd62b9c50dbf05e"} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.518197 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-catalog-content\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.518279 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-utilities\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.518313 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2m9mp\" (UniqueName: \"kubernetes.io/projected/c7361241-f3c4-483a-9aa8-d1af72ab348b-kube-api-access-2m9mp\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.518352 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.518687 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-catalog-content\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: E0216 15:10:00.518811 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 15:10:01.018794733 +0000 UTC m=+150.310787628 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rxmc7" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.519860 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-utilities\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.523667 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.523694 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh" event={"ID":"569d3eef-2b86-44fb-90a1-2bceae4d2e09","Type":"ContainerDied","Data":"09d0aabba92f6623b3149a4a24558dd2a14e4f8003950252159b0378ecfb6597"} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.523795 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09d0aabba92f6623b3149a4a24558dd2a14e4f8003950252159b0378ecfb6597" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.543924 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"69520d3baf29587055c154345b617df6f4128fd374cb8d6fdaf8a6c8584380b9"} Feb 16 15:10:00 crc kubenswrapper[4835]: 
I0216 15:10:00.543996 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"735edbab4a00dfef34a5f16e21f7f32ddec9c8f2dfff62a9c56790af6d80c535"} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.549924 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m9mp\" (UniqueName: \"kubernetes.io/projected/c7361241-f3c4-483a-9aa8-d1af72ab348b-kube-api-access-2m9mp\") pod \"certified-operators-kxxft\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.566611 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"468e882aa23e2202a0894e58d6577280102ec9477fa71d6328b137dae270185c"} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.566674 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a95a2ad591f70593f3b82837bebe5682ed1c9518974ac06b99f9fe2dd3fe83e7"} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.567562 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.583025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm95t" event={"ID":"e1fe6dd1-829b-4120-8585-040e9032f292","Type":"ContainerStarted","Data":"f53b0a889a88061cd55eeb7fc92ef61e0fca6478f4565d2a15e7b56c90a79c7c"} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.593692 4835 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" event={"ID":"aeddce00-4ffb-40ba-832b-a2d30aee4528","Type":"ContainerStarted","Data":"f897525ea25b619917635a31ebcc62e357291cb083669e4027a94fe1689ea9a0"} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.605969 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.612389 4835 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T15:09:59.73525979Z","Handler":null,"Name":""} Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.616284 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dcqgf" podStartSLOduration=11.616265618 podStartE2EDuration="11.616265618s" podCreationTimestamp="2026-02-16 15:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:00.614308294 +0000 UTC m=+149.906301209" watchObservedRunningTime="2026-02-16 15:10:00.616265618 +0000 UTC m=+149.908258513" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.618976 4835 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.619029 4835 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.619036 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.619329 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxvd\" (UniqueName: \"kubernetes.io/projected/76be94b7-5f32-478a-81a6-51758b5f7280-kube-api-access-flxvd\") pod \"community-operators-qlnj4\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.619434 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-utilities\") pod \"community-operators-qlnj4\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.619478 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-catalog-content\") pod \"community-operators-qlnj4\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.630403 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.720909 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-catalog-content\") pod \"community-operators-qlnj4\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.721018 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxvd\" (UniqueName: \"kubernetes.io/projected/76be94b7-5f32-478a-81a6-51758b5f7280-kube-api-access-flxvd\") pod \"community-operators-qlnj4\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.721090 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.721222 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-utilities\") pod \"community-operators-qlnj4\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.723023 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-catalog-content\") pod \"community-operators-qlnj4\" (UID: 
\"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.724857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-utilities\") pod \"community-operators-qlnj4\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.730752 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.730819 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.732676 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98n7v"] Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.745062 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxvd\" (UniqueName: \"kubernetes.io/projected/76be94b7-5f32-478a-81a6-51758b5f7280-kube-api-access-flxvd\") pod \"community-operators-qlnj4\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.759084 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rxmc7\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.803022 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.832507 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:00 crc kubenswrapper[4835]: I0216 15:10:00.910511 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxxft"] Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.080316 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxmc7"] Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.115261 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:01 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:01 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:01 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.115356 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.138197 4835 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-qlnj4"]
Feb 16 15:10:01 crc kubenswrapper[4835]: W0216 15:10:01.154435 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76be94b7_5f32_478a_81a6_51758b5f7280.slice/crio-f436e969a6412b1bccbd6793d00650897cb8ddf50030f7cfb21db7e8d2c1c728 WatchSource:0}: Error finding container f436e969a6412b1bccbd6793d00650897cb8ddf50030f7cfb21db7e8d2c1c728: Status 404 returned error can't find the container with id f436e969a6412b1bccbd6793d00650897cb8ddf50030f7cfb21db7e8d2c1c728
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.386708 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.553923 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.554557 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.557271 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.560400 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.572891 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.600933 4835 generic.go:334] "Generic (PLEG): container finished" podID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerID="a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac" exitCode=0
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.601028 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxxft" event={"ID":"c7361241-f3c4-483a-9aa8-d1af72ab348b","Type":"ContainerDied","Data":"a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.601061 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxxft" event={"ID":"c7361241-f3c4-483a-9aa8-d1af72ab348b","Type":"ContainerStarted","Data":"aa79bdb7f9483bfe6a8d60dd0ac78bde56df4c0e8212bedc4eabab9ecc34dbb4"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.602986 4835 generic.go:334] "Generic (PLEG): container finished" podID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerID="711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0" exitCode=0
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.603085 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98n7v" event={"ID":"61564e44-b4e6-4a57-9232-3403b0173aa6","Type":"ContainerDied","Data":"711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.603144 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98n7v" event={"ID":"61564e44-b4e6-4a57-9232-3403b0173aa6","Type":"ContainerStarted","Data":"ed28996c378ff3d6bb968d3e58f0e8b7c57113b715a9274a102d0caae7714c8f"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.603190 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.606162 4835 generic.go:334] "Generic (PLEG): container finished" podID="76be94b7-5f32-478a-81a6-51758b5f7280" containerID="bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941" exitCode=0
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.606206 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnj4" event={"ID":"76be94b7-5f32-478a-81a6-51758b5f7280","Type":"ContainerDied","Data":"bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.606260 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnj4" event={"ID":"76be94b7-5f32-478a-81a6-51758b5f7280","Type":"ContainerStarted","Data":"f436e969a6412b1bccbd6793d00650897cb8ddf50030f7cfb21db7e8d2c1c728"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.608999 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" event={"ID":"8cb8fe18-6040-4d23-a89b-e338df070e75","Type":"ContainerStarted","Data":"08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.609040 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" event={"ID":"8cb8fe18-6040-4d23-a89b-e338df070e75","Type":"ContainerStarted","Data":"a90502c543c3682ca09092ed2389c380cf99bc25ec7f1478fc2c41619f19c145"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.609214 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.625083 4835 generic.go:334] "Generic (PLEG): container finished" podID="e1fe6dd1-829b-4120-8585-040e9032f292" containerID="76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7" exitCode=0
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.625208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm95t" event={"ID":"e1fe6dd1-829b-4120-8585-040e9032f292","Type":"ContainerDied","Data":"76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7"}
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.633329 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90111e4a-d6ab-49a6-bae9-3409f51770ee-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"90111e4a-d6ab-49a6-bae9-3409f51770ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.633418 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90111e4a-d6ab-49a6-bae9-3409f51770ee-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"90111e4a-d6ab-49a6-bae9-3409f51770ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.646740 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" podStartSLOduration=129.646720392 podStartE2EDuration="2m9.646720392s" podCreationTimestamp="2026-02-16 15:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:01.640057659 +0000 UTC m=+150.932050584" watchObservedRunningTime="2026-02-16 15:10:01.646720392 +0000 UTC m=+150.938713287"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.734682 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90111e4a-d6ab-49a6-bae9-3409f51770ee-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"90111e4a-d6ab-49a6-bae9-3409f51770ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.734746 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90111e4a-d6ab-49a6-bae9-3409f51770ee-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"90111e4a-d6ab-49a6-bae9-3409f51770ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.734895 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90111e4a-d6ab-49a6-bae9-3409f51770ee-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"90111e4a-d6ab-49a6-bae9-3409f51770ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.760624 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90111e4a-d6ab-49a6-bae9-3409f51770ee-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"90111e4a-d6ab-49a6-bae9-3409f51770ee\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.870759 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.996022 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-28xh9"
Feb 16 15:10:01 crc kubenswrapper[4835]: I0216 15:10:01.996603 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-28xh9"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.002413 4835 patch_prober.go:28] interesting pod/console-f9d7485db-28xh9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.002483 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-28xh9" podUID="3d329678-3edc-4b70-9796-85c6ada120de" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.021520 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddnzw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.021585 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ddnzw" podUID="b6f870fb-6a4d-4d9c-9990-83a9af347710" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.021685 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-ddnzw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.021781 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ddnzw" podUID="b6f870fb-6a4d-4d9c-9990-83a9af347710" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.35:8080/\": dial tcp 10.217.0.35:8080: connect: connection refused"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.051987 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qgqwv"]
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.055143 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.057292 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.069849 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgqwv"]
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.080230 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 15:10:02 crc kubenswrapper[4835]: W0216 15:10:02.097245 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod90111e4a_d6ab_49a6_bae9_3409f51770ee.slice/crio-026ac0cce0ffbaf0c860c915ca7cbeb19b020d883ebb004f867a35032e3bbd64 WatchSource:0}: Error finding container 026ac0cce0ffbaf0c860c915ca7cbeb19b020d883ebb004f867a35032e3bbd64: Status 404 returned error can't find the container with id 026ac0cce0ffbaf0c860c915ca7cbeb19b020d883ebb004f867a35032e3bbd64
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.117569 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 15:10:02 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 16 15:10:02 crc kubenswrapper[4835]: [+]process-running ok
Feb 16 15:10:02 crc kubenswrapper[4835]: healthz check failed
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.117645 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.157284 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rd4f\" (UniqueName: \"kubernetes.io/projected/0132c288-a83e-4f3c-b620-4cac59f56df9-kube-api-access-8rd4f\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.157416 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-utilities\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.157456 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-catalog-content\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.258681 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rd4f\" (UniqueName: \"kubernetes.io/projected/0132c288-a83e-4f3c-b620-4cac59f56df9-kube-api-access-8rd4f\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.258745 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-utilities\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.258782 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-catalog-content\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.259217 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-catalog-content\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.259291 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-utilities\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.279591 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rd4f\" (UniqueName: \"kubernetes.io/projected/0132c288-a83e-4f3c-b620-4cac59f56df9-kube-api-access-8rd4f\") pod \"redhat-marketplace-qgqwv\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.382269 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgqwv"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.451478 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6rzr6"]
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.452431 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.468831 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rzr6"]
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.564414 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqk5s\" (UniqueName: \"kubernetes.io/projected/b5901620-08e0-4ade-974d-e8c241526ff1-kube-api-access-qqk5s\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.564491 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-utilities\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.564540 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-catalog-content\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.638601 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"90111e4a-d6ab-49a6-bae9-3409f51770ee","Type":"ContainerStarted","Data":"b7d6ed31063cc5d68703ebc63e7b10e3e495d5dbc456df7c9aed07121d3cbc11"}
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.638637 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"90111e4a-d6ab-49a6-bae9-3409f51770ee","Type":"ContainerStarted","Data":"026ac0cce0ffbaf0c860c915ca7cbeb19b020d883ebb004f867a35032e3bbd64"}
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.640226 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgqwv"]
Feb 16 15:10:02 crc kubenswrapper[4835]: W0216 15:10:02.642831 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0132c288_a83e_4f3c_b620_4cac59f56df9.slice/crio-9efc25f9e97aeb12ef6403e8d5ac07be08da988e45b714bfd1e7d791a1297691 WatchSource:0}: Error finding container 9efc25f9e97aeb12ef6403e8d5ac07be08da988e45b714bfd1e7d791a1297691: Status 404 returned error can't find the container with id 9efc25f9e97aeb12ef6403e8d5ac07be08da988e45b714bfd1e7d791a1297691
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.659785 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.659770127 podStartE2EDuration="1.659770127s" podCreationTimestamp="2026-02-16 15:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:02.658571874 +0000 UTC m=+151.950564769" watchObservedRunningTime="2026-02-16 15:10:02.659770127 +0000 UTC m=+151.951763022"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.665750 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-catalog-content\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.665879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqk5s\" (UniqueName: \"kubernetes.io/projected/b5901620-08e0-4ade-974d-e8c241526ff1-kube-api-access-qqk5s\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.665946 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-utilities\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.666294 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-catalog-content\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.666362 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-utilities\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.695338 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqk5s\" (UniqueName: \"kubernetes.io/projected/b5901620-08e0-4ade-974d-e8c241526ff1-kube-api-access-qqk5s\") pod \"redhat-marketplace-6rzr6\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:02 crc kubenswrapper[4835]: I0216 15:10:02.804934 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rzr6"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.070260 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2blp8"]
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.078973 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.102477 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.116639 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7tzqp"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.126802 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2blp8"]
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.150721 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 15:10:03 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 16 15:10:03 crc kubenswrapper[4835]: [+]process-running ok
Feb 16 15:10:03 crc kubenswrapper[4835]: healthz check failed
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.150802 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.171083 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rzr6"]
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.174703 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.205813 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-catalog-content\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.205889 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fq7c\" (UniqueName: \"kubernetes.io/projected/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-kube-api-access-9fq7c\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.205926 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-utilities\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: W0216 15:10:03.238813 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5901620_08e0_4ade_974d_e8c241526ff1.slice/crio-4adef6c0ca4911f0f94d1e5c1918d3cbbff6a1829ef9fc41b2dbbf068d32ba07 WatchSource:0}: Error finding container 4adef6c0ca4911f0f94d1e5c1918d3cbbff6a1829ef9fc41b2dbbf068d32ba07: Status 404 returned error can't find the container with id 4adef6c0ca4911f0f94d1e5c1918d3cbbff6a1829ef9fc41b2dbbf068d32ba07
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.307570 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-catalog-content\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.307620 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fq7c\" (UniqueName: \"kubernetes.io/projected/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-kube-api-access-9fq7c\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.307672 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-utilities\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.309657 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-catalog-content\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.310680 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-utilities\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.340625 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fq7c\" (UniqueName: \"kubernetes.io/projected/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-kube-api-access-9fq7c\") pod \"redhat-operators-2blp8\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.455978 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2blp8"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.462483 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2j7l9"]
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.463793 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.467836 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2j7l9"]
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.615201 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-catalog-content\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.615360 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr72v\" (UniqueName: \"kubernetes.io/projected/2498fe6c-9af0-4225-8450-558085a67825-kube-api-access-mr72v\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.615396 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-utilities\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.649757 4835 generic.go:334] "Generic (PLEG): container finished" podID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerID="389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433" exitCode=0
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.649867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgqwv" event={"ID":"0132c288-a83e-4f3c-b620-4cac59f56df9","Type":"ContainerDied","Data":"389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433"}
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.650061 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgqwv" event={"ID":"0132c288-a83e-4f3c-b620-4cac59f56df9","Type":"ContainerStarted","Data":"9efc25f9e97aeb12ef6403e8d5ac07be08da988e45b714bfd1e7d791a1297691"}
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.656579 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rzr6" event={"ID":"b5901620-08e0-4ade-974d-e8c241526ff1","Type":"ContainerStarted","Data":"4adef6c0ca4911f0f94d1e5c1918d3cbbff6a1829ef9fc41b2dbbf068d32ba07"}
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.660326 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.668888 4835 generic.go:334] "Generic (PLEG): container finished" podID="90111e4a-d6ab-49a6-bae9-3409f51770ee" containerID="b7d6ed31063cc5d68703ebc63e7b10e3e495d5dbc456df7c9aed07121d3cbc11" exitCode=0
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.668958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"90111e4a-d6ab-49a6-bae9-3409f51770ee","Type":"ContainerDied","Data":"b7d6ed31063cc5d68703ebc63e7b10e3e495d5dbc456df7c9aed07121d3cbc11"}
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.669415 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rcbfk"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.716435 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-catalog-content\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.716520 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr72v\" (UniqueName: \"kubernetes.io/projected/2498fe6c-9af0-4225-8450-558085a67825-kube-api-access-mr72v\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.716568 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-utilities\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.717025 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-utilities\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.717240 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-catalog-content\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.803721 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr72v\" (UniqueName: \"kubernetes.io/projected/2498fe6c-9af0-4225-8450-558085a67825-kube-api-access-mr72v\") pod \"redhat-operators-2j7l9\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:03 crc kubenswrapper[4835]: I0216 15:10:03.920323 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2blp8"]
Feb 16 15:10:03 crc kubenswrapper[4835]: W0216 15:10:03.949244 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod822c5e9d_78fa_4c80_b3f4_e3a0310020a2.slice/crio-8bb4d6bde9f857761f4f8f566d493fd7a9f74d849e17664fa141174f74673979 WatchSource:0}: Error finding container 8bb4d6bde9f857761f4f8f566d493fd7a9f74d849e17664fa141174f74673979: Status 404 returned error can't find the container with id 8bb4d6bde9f857761f4f8f566d493fd7a9f74d849e17664fa141174f74673979
Feb 16 15:10:04 crc kubenswrapper[4835]: I0216 15:10:04.100893 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2j7l9"
Feb 16 15:10:04 crc kubenswrapper[4835]: I0216 15:10:04.121146 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 15:10:04 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 16 15:10:04 crc kubenswrapper[4835]: [+]process-running ok
Feb 16 15:10:04 crc kubenswrapper[4835]: healthz check failed
Feb 16 15:10:04 crc kubenswrapper[4835]: I0216 15:10:04.121203 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 15:10:04 crc kubenswrapper[4835]: I0216 15:10:04.493245 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2j7l9"]
Feb 16 15:10:04 crc kubenswrapper[4835]: W0216 15:10:04.551516 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2498fe6c_9af0_4225_8450_558085a67825.slice/crio-68154f7b236b0125ba0172ca5e100da126a4b77e24a9d177f79b60e5488fe97e WatchSource:0}: Error finding container 68154f7b236b0125ba0172ca5e100da126a4b77e24a9d177f79b60e5488fe97e: Status 404 returned error can't find the container with id 68154f7b236b0125ba0172ca5e100da126a4b77e24a9d177f79b60e5488fe97e
Feb 16 15:10:04 crc kubenswrapper[4835]: I0216 15:10:04.681906 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j7l9" event={"ID":"2498fe6c-9af0-4225-8450-558085a67825","Type":"ContainerStarted","Data":"68154f7b236b0125ba0172ca5e100da126a4b77e24a9d177f79b60e5488fe97e"}
Feb 16 15:10:04 crc kubenswrapper[4835]: I0216 15:10:04.685457
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blp8" event={"ID":"822c5e9d-78fa-4c80-b3f4-e3a0310020a2","Type":"ContainerStarted","Data":"8bb4d6bde9f857761f4f8f566d493fd7a9f74d849e17664fa141174f74673979"} Feb 16 15:10:04 crc kubenswrapper[4835]: I0216 15:10:04.697580 4835 generic.go:334] "Generic (PLEG): container finished" podID="b5901620-08e0-4ade-974d-e8c241526ff1" containerID="2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6" exitCode=0 Feb 16 15:10:04 crc kubenswrapper[4835]: I0216 15:10:04.698423 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rzr6" event={"ID":"b5901620-08e0-4ade-974d-e8c241526ff1","Type":"ContainerDied","Data":"2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6"} Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.100055 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.118996 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:05 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:05 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:05 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.119048 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.293803 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90111e4a-d6ab-49a6-bae9-3409f51770ee-kubelet-dir\") pod \"90111e4a-d6ab-49a6-bae9-3409f51770ee\" (UID: \"90111e4a-d6ab-49a6-bae9-3409f51770ee\") " Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.294549 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90111e4a-d6ab-49a6-bae9-3409f51770ee-kube-api-access\") pod \"90111e4a-d6ab-49a6-bae9-3409f51770ee\" (UID: \"90111e4a-d6ab-49a6-bae9-3409f51770ee\") " Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.294573 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90111e4a-d6ab-49a6-bae9-3409f51770ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "90111e4a-d6ab-49a6-bae9-3409f51770ee" (UID: "90111e4a-d6ab-49a6-bae9-3409f51770ee"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.297290 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90111e4a-d6ab-49a6-bae9-3409f51770ee-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.303090 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90111e4a-d6ab-49a6-bae9-3409f51770ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "90111e4a-d6ab-49a6-bae9-3409f51770ee" (UID: "90111e4a-d6ab-49a6-bae9-3409f51770ee"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.400423 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90111e4a-d6ab-49a6-bae9-3409f51770ee-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.710547 4835 generic.go:334] "Generic (PLEG): container finished" podID="2498fe6c-9af0-4225-8450-558085a67825" containerID="aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df" exitCode=0 Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.710634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j7l9" event={"ID":"2498fe6c-9af0-4225-8450-558085a67825","Type":"ContainerDied","Data":"aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df"} Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.715239 4835 generic.go:334] "Generic (PLEG): container finished" podID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerID="a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c" exitCode=0 Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.715302 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blp8" event={"ID":"822c5e9d-78fa-4c80-b3f4-e3a0310020a2","Type":"ContainerDied","Data":"a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c"} Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.736318 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"90111e4a-d6ab-49a6-bae9-3409f51770ee","Type":"ContainerDied","Data":"026ac0cce0ffbaf0c860c915ca7cbeb19b020d883ebb004f867a35032e3bbd64"} Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.736390 4835 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="026ac0cce0ffbaf0c860c915ca7cbeb19b020d883ebb004f867a35032e3bbd64" Feb 16 15:10:05 crc kubenswrapper[4835]: I0216 15:10:05.736505 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 15:10:06 crc kubenswrapper[4835]: I0216 15:10:06.116188 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:06 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:06 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:06 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:06 crc kubenswrapper[4835]: I0216 15:10:06.116290 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:07 crc kubenswrapper[4835]: I0216 15:10:07.117821 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:07 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:07 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:07 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:07 crc kubenswrapper[4835]: I0216 15:10:07.118216 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:08 crc kubenswrapper[4835]: I0216 
15:10:08.117718 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:08 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:08 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:08 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:08 crc kubenswrapper[4835]: I0216 15:10:08.117772 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:08 crc kubenswrapper[4835]: I0216 15:10:08.514882 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2d8lr" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.115736 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:09 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:09 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:09 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.115792 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.121223 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 15:10:09 crc kubenswrapper[4835]: 
E0216 15:10:09.121428 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90111e4a-d6ab-49a6-bae9-3409f51770ee" containerName="pruner" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.121445 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="90111e4a-d6ab-49a6-bae9-3409f51770ee" containerName="pruner" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.124333 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="90111e4a-d6ab-49a6-bae9-3409f51770ee" containerName="pruner" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.124790 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.131110 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.131307 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.134412 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.273845 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.274134 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.375184 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.375277 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.375367 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.398609 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:09 crc kubenswrapper[4835]: I0216 15:10:09.461079 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:10 crc kubenswrapper[4835]: I0216 15:10:10.040375 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 15:10:10 crc kubenswrapper[4835]: I0216 15:10:10.117226 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:10 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:10 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:10 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:10 crc kubenswrapper[4835]: I0216 15:10:10.117283 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:10 crc kubenswrapper[4835]: I0216 15:10:10.894886 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092","Type":"ContainerStarted","Data":"e008570f30860888e7b92a572fefd3d23acbe61e86d41fcb05085a2f4e6d6593"} Feb 16 15:10:11 crc kubenswrapper[4835]: I0216 15:10:11.114955 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:11 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:11 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:11 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:11 crc kubenswrapper[4835]: I0216 15:10:11.115010 4835 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:11 crc kubenswrapper[4835]: I0216 15:10:11.996112 4835 patch_prober.go:28] interesting pod/console-f9d7485db-28xh9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 16 15:10:11 crc kubenswrapper[4835]: I0216 15:10:11.996185 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-28xh9" podUID="3d329678-3edc-4b70-9796-85c6ada120de" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 16 15:10:12 crc kubenswrapper[4835]: I0216 15:10:12.033279 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ddnzw" Feb 16 15:10:12 crc kubenswrapper[4835]: I0216 15:10:12.117670 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:12 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:12 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:12 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:12 crc kubenswrapper[4835]: I0216 15:10:12.117727 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:13 crc kubenswrapper[4835]: I0216 
15:10:13.116771 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:13 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:13 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:13 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:13 crc kubenswrapper[4835]: I0216 15:10:13.117186 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.116564 4835 patch_prober.go:28] interesting pod/router-default-5444994796-7tzqp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 15:10:14 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 16 15:10:14 crc kubenswrapper[4835]: [+]process-running ok Feb 16 15:10:14 crc kubenswrapper[4835]: healthz check failed Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.116960 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7tzqp" podUID="59d881f5-1d23-49c7-8d84-71231e638736" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.199499 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swn24"] Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.210450 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" podUID="41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" containerName="controller-manager" containerID="cri-o://fa8c8a61e4f61a4292a1c63f57132752a6f48c5889cef9c968819fe8bc9c9b12" gracePeriod=30 Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.226087 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv"] Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.231174 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" podUID="e964e5a5-fe00-4d83-8416-2e2bd64c359d" containerName="route-controller-manager" containerID="cri-o://b61f118f70c50f66754a126dee0ea5e57875b369b31a3605296cfeda141ac611" gracePeriod=30 Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.262293 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.270684 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5121c96d-796f-46b5-8889-b7e74c329b2f-metrics-certs\") pod \"network-metrics-daemon-b5nkt\" (UID: \"5121c96d-796f-46b5-8889-b7e74c329b2f\") " pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.307121 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-b5nkt" Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.927767 4835 generic.go:334] "Generic (PLEG): container finished" podID="41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" containerID="fa8c8a61e4f61a4292a1c63f57132752a6f48c5889cef9c968819fe8bc9c9b12" exitCode=0 Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.927855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" event={"ID":"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27","Type":"ContainerDied","Data":"fa8c8a61e4f61a4292a1c63f57132752a6f48c5889cef9c968819fe8bc9c9b12"} Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.929654 4835 generic.go:334] "Generic (PLEG): container finished" podID="e964e5a5-fe00-4d83-8416-2e2bd64c359d" containerID="b61f118f70c50f66754a126dee0ea5e57875b369b31a3605296cfeda141ac611" exitCode=0 Feb 16 15:10:14 crc kubenswrapper[4835]: I0216 15:10:14.929697 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" event={"ID":"e964e5a5-fe00-4d83-8416-2e2bd64c359d","Type":"ContainerDied","Data":"b61f118f70c50f66754a126dee0ea5e57875b369b31a3605296cfeda141ac611"} Feb 16 15:10:15 crc kubenswrapper[4835]: I0216 15:10:15.125577 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:10:15 crc kubenswrapper[4835]: I0216 15:10:15.128874 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7tzqp" Feb 16 15:10:18 crc kubenswrapper[4835]: I0216 15:10:18.587193 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 16 15:10:18 crc kubenswrapper[4835]: I0216 15:10:18.587700 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:10:20 crc kubenswrapper[4835]: I0216 15:10:20.839097 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:10:22 crc kubenswrapper[4835]: I0216 15:10:22.449061 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:10:22 crc kubenswrapper[4835]: I0216 15:10:22.453857 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:10:22 crc kubenswrapper[4835]: I0216 15:10:22.636733 4835 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-n4srv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 15:10:22 crc kubenswrapper[4835]: I0216 15:10:22.636793 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" podUID="e964e5a5-fe00-4d83-8416-2e2bd64c359d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 15:10:22 crc kubenswrapper[4835]: I0216 15:10:22.732811 4835 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-swn24 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 15:10:22 crc kubenswrapper[4835]: I0216 15:10:22.732871 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" podUID="41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.081787 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.086059 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.131815 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v"] Feb 16 15:10:27 crc kubenswrapper[4835]: E0216 15:10:27.132091 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" containerName="controller-manager" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.132103 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" containerName="controller-manager" Feb 16 15:10:27 crc kubenswrapper[4835]: E0216 15:10:27.132114 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e964e5a5-fe00-4d83-8416-2e2bd64c359d" containerName="route-controller-manager" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.132120 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e964e5a5-fe00-4d83-8416-2e2bd64c359d" containerName="route-controller-manager" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.132217 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" containerName="controller-manager" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.132229 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e964e5a5-fe00-4d83-8416-2e2bd64c359d" containerName="route-controller-manager" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.132876 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.149426 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v"] Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166227 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles\") pod \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166339 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca\") pod \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166398 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdkn\" (UniqueName: \"kubernetes.io/projected/e964e5a5-fe00-4d83-8416-2e2bd64c359d-kube-api-access-gjdkn\") pod \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166441 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e964e5a5-fe00-4d83-8416-2e2bd64c359d-serving-cert\") pod \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166506 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-client-ca\") pod 
\"e964e5a5-fe00-4d83-8416-2e2bd64c359d\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166590 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config\") pod \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166635 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj9qp\" (UniqueName: \"kubernetes.io/projected/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-kube-api-access-cj9qp\") pod \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166705 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-config\") pod \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\" (UID: \"e964e5a5-fe00-4d83-8416-2e2bd64c359d\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.166793 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert\") pod \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\" (UID: \"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27\") " Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.167511 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-client-ca" (OuterVolumeSpecName: "client-ca") pod "e964e5a5-fe00-4d83-8416-2e2bd64c359d" (UID: "e964e5a5-fe00-4d83-8416-2e2bd64c359d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.167540 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.167634 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca" (OuterVolumeSpecName: "client-ca") pod "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.168235 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config" (OuterVolumeSpecName: "config") pod "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.169041 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-config" (OuterVolumeSpecName: "config") pod "e964e5a5-fe00-4d83-8416-2e2bd64c359d" (UID: "e964e5a5-fe00-4d83-8416-2e2bd64c359d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.175212 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e964e5a5-fe00-4d83-8416-2e2bd64c359d-kube-api-access-gjdkn" (OuterVolumeSpecName: "kube-api-access-gjdkn") pod "e964e5a5-fe00-4d83-8416-2e2bd64c359d" (UID: "e964e5a5-fe00-4d83-8416-2e2bd64c359d"). InnerVolumeSpecName "kube-api-access-gjdkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.177240 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-kube-api-access-cj9qp" (OuterVolumeSpecName: "kube-api-access-cj9qp") pod "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27"). InnerVolumeSpecName "kube-api-access-cj9qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.177954 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" (UID: "41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.178301 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e964e5a5-fe00-4d83-8416-2e2bd64c359d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e964e5a5-fe00-4d83-8416-2e2bd64c359d" (UID: "e964e5a5-fe00-4d83-8416-2e2bd64c359d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268565 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-646ds\" (UniqueName: \"kubernetes.io/projected/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-kube-api-access-646ds\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268638 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-config\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268679 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-serving-cert\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268747 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-client-ca\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268815 4835 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-gjdkn\" (UniqueName: \"kubernetes.io/projected/e964e5a5-fe00-4d83-8416-2e2bd64c359d-kube-api-access-gjdkn\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268829 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e964e5a5-fe00-4d83-8416-2e2bd64c359d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268844 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268856 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268868 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj9qp\" (UniqueName: \"kubernetes.io/projected/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-kube-api-access-cj9qp\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268878 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e964e5a5-fe00-4d83-8416-2e2bd64c359d-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268889 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268899 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" 
Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.268914 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.372028 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-646ds\" (UniqueName: \"kubernetes.io/projected/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-kube-api-access-646ds\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.372214 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-config\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.372301 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-serving-cert\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.372451 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-client-ca\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" 
Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.374308 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-client-ca\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.374853 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-config\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.383761 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-serving-cert\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.387095 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-646ds\" (UniqueName: \"kubernetes.io/projected/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-kube-api-access-646ds\") pod \"route-controller-manager-5c554bd657-tr88v\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:27 crc kubenswrapper[4835]: I0216 15:10:27.456047 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.014743 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.014732 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv" event={"ID":"e964e5a5-fe00-4d83-8416-2e2bd64c359d","Type":"ContainerDied","Data":"1e2ff8e9321719e029e3300a15adcd3e0ae1cdccaf5aeca8b4d8b904f2c4cfd2"} Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.014869 4835 scope.go:117] "RemoveContainer" containerID="b61f118f70c50f66754a126dee0ea5e57875b369b31a3605296cfeda141ac611" Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.016710 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" event={"ID":"41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27","Type":"ContainerDied","Data":"36bc4afbe20afde4928da465e732f4f937c5059c057d46157d7760dc2e973bc5"} Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.016771 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-swn24" Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.056407 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv"] Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.062640 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-n4srv"] Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.075308 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swn24"] Feb 16 15:10:28 crc kubenswrapper[4835]: I0216 15:10:28.079735 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-swn24"] Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.390774 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27" path="/var/lib/kubelet/pods/41f358f5-64f6-4b43-b8d2-4a7f5ffbcf27/volumes" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.391994 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e964e5a5-fe00-4d83-8416-2e2bd64c359d" path="/var/lib/kubelet/pods/e964e5a5-fe00-4d83-8416-2e2bd64c359d/volumes" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.779872 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7599458999-k8d5h"] Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.781414 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.786631 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.786749 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.786825 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.787207 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.787476 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.787701 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.794598 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7599458999-k8d5h"] Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.796357 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.914313 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-config\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " 
pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.914416 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-serving-cert\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.914671 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkwp6\" (UniqueName: \"kubernetes.io/projected/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-kube-api-access-bkwp6\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.914862 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-proxy-ca-bundles\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:29 crc kubenswrapper[4835]: I0216 15:10:29.914970 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-client-ca\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.017909 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-config\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.018103 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-serving-cert\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.018180 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkwp6\" (UniqueName: \"kubernetes.io/projected/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-kube-api-access-bkwp6\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.018237 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-proxy-ca-bundles\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.018306 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-client-ca\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 
15:10:30.020006 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-proxy-ca-bundles\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.021472 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-client-ca\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.031697 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-serving-cert\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.048238 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkwp6\" (UniqueName: \"kubernetes.io/projected/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-kube-api-access-bkwp6\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.137966 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-config\") pod \"controller-manager-7599458999-k8d5h\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " 
pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:30 crc kubenswrapper[4835]: I0216 15:10:30.416005 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:31 crc kubenswrapper[4835]: E0216 15:10:31.095616 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 15:10:31 crc kubenswrapper[4835]: E0216 15:10:31.096132 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flxvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qlnj4_openshift-marketplace(76be94b7-5f32-478a-81a6-51758b5f7280): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:10:31 crc kubenswrapper[4835]: E0216 15:10:31.097355 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qlnj4" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" Feb 16 15:10:31 crc kubenswrapper[4835]: E0216 15:10:31.125568 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 16 15:10:31 crc kubenswrapper[4835]: E0216 15:10:31.126037 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtxr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-98n7v_openshift-marketplace(61564e44-b4e6-4a57-9232-3403b0173aa6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:10:31 crc kubenswrapper[4835]: E0216 15:10:31.127294 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-98n7v" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" Feb 16 15:10:32 crc 
kubenswrapper[4835]: E0216 15:10:32.703032 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qlnj4" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" Feb 16 15:10:32 crc kubenswrapper[4835]: E0216 15:10:32.703333 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-98n7v" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" Feb 16 15:10:32 crc kubenswrapper[4835]: I0216 15:10:32.893829 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r5clm" Feb 16 15:10:34 crc kubenswrapper[4835]: I0216 15:10:34.184433 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7599458999-k8d5h"] Feb 16 15:10:34 crc kubenswrapper[4835]: I0216 15:10:34.283101 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v"] Feb 16 15:10:37 crc kubenswrapper[4835]: I0216 15:10:37.360289 4835 scope.go:117] "RemoveContainer" containerID="fa8c8a61e4f61a4292a1c63f57132752a6f48c5889cef9c968819fe8bc9c9b12" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.372190 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.372319 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mr72v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2j7l9_openshift-marketplace(2498fe6c-9af0-4225-8450-558085a67825): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.373787 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2j7l9" podUID="2498fe6c-9af0-4225-8450-558085a67825" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.417467 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.417666 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2m9mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kxxft_openshift-marketplace(c7361241-f3c4-483a-9aa8-d1af72ab348b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.419693 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kxxft" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.493238 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.493419 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fq7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2blp8_openshift-marketplace(822c5e9d-78fa-4c80-b3f4-e3a0310020a2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:10:37 crc kubenswrapper[4835]: E0216 15:10:37.494680 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2blp8" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" Feb 16 15:10:37 crc 
kubenswrapper[4835]: I0216 15:10:37.613917 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-b5nkt"] Feb 16 15:10:37 crc kubenswrapper[4835]: I0216 15:10:37.640839 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7599458999-k8d5h"] Feb 16 15:10:37 crc kubenswrapper[4835]: I0216 15:10:37.687186 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v"] Feb 16 15:10:37 crc kubenswrapper[4835]: W0216 15:10:37.690218 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1ebf1d4_a23b_4556_8b83_c3b48ca3409b.slice/crio-0e8eac0d7f55a32358a822346f8b6ff3bcb245f7120c2404d8965be48e31b190 WatchSource:0}: Error finding container 0e8eac0d7f55a32358a822346f8b6ff3bcb245f7120c2404d8965be48e31b190: Status 404 returned error can't find the container with id 0e8eac0d7f55a32358a822346f8b6ff3bcb245f7120c2404d8965be48e31b190 Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.077646 4835 generic.go:334] "Generic (PLEG): container finished" podID="e1fe6dd1-829b-4120-8585-040e9032f292" containerID="db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92" exitCode=0 Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.077711 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm95t" event={"ID":"e1fe6dd1-829b-4120-8585-040e9032f292","Type":"ContainerDied","Data":"db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.079823 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" 
event={"ID":"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b","Type":"ContainerStarted","Data":"520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.079844 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" podUID="a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" containerName="route-controller-manager" containerID="cri-o://520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6" gracePeriod=30 Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.079869 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" event={"ID":"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b","Type":"ContainerStarted","Data":"0e8eac0d7f55a32358a822346f8b6ff3bcb245f7120c2404d8965be48e31b190"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.079889 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.082883 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" event={"ID":"531b1860-3fe4-41f5-91c2-a30e7e71ff5e","Type":"ContainerStarted","Data":"e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.082917 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" podUID="531b1860-3fe4-41f5-91c2-a30e7e71ff5e" containerName="controller-manager" containerID="cri-o://e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea" gracePeriod=30 Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.082928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" event={"ID":"531b1860-3fe4-41f5-91c2-a30e7e71ff5e","Type":"ContainerStarted","Data":"5ab3edc3567348a1ca9ee44d21cd58c87e47a69c26d25613fb9b515834df874a"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.083096 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.086168 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092","Type":"ContainerStarted","Data":"d865fec9a65bd56d6fc60ee25e48afadb6693ca0a42f612da305610ce3ce5595"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.089615 4835 generic.go:334] "Generic (PLEG): container finished" podID="b5901620-08e0-4ade-974d-e8c241526ff1" containerID="8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8" exitCode=0 Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.089661 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rzr6" event={"ID":"b5901620-08e0-4ade-974d-e8c241526ff1","Type":"ContainerDied","Data":"8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.098316 4835 patch_prober.go:28] interesting pod/controller-manager-7599458999-k8d5h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:34772->10.217.0.55:8443: read: connection reset by peer" start-of-body= Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.098370 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" podUID="531b1860-3fe4-41f5-91c2-a30e7e71ff5e" containerName="controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:34772->10.217.0.55:8443: read: connection reset by peer" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.105717 4835 generic.go:334] "Generic (PLEG): container finished" podID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerID="5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f" exitCode=0 Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.105794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgqwv" event={"ID":"0132c288-a83e-4f3c-b620-4cac59f56df9","Type":"ContainerDied","Data":"5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.115870 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" event={"ID":"5121c96d-796f-46b5-8889-b7e74c329b2f","Type":"ContainerStarted","Data":"bf01341bbb1e8d7d8b6164b4065b63cf82826a4947d2a6da3ec7a97549d6748e"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.115944 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" event={"ID":"5121c96d-796f-46b5-8889-b7e74c329b2f","Type":"ContainerStarted","Data":"0a411ed4119557d0479ebc474cf1bdfb2a7f6a5632461a015af6ab5954b6b5d0"} Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.116281 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=29.116262425 podStartE2EDuration="29.116262425s" podCreationTimestamp="2026-02-16 15:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:38.110747733 +0000 UTC m=+187.402740628" watchObservedRunningTime="2026-02-16 15:10:38.116262425 +0000 UTC m=+187.408255320" Feb 16 15:10:38 crc kubenswrapper[4835]: E0216 
15:10:38.117555 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2blp8" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" Feb 16 15:10:38 crc kubenswrapper[4835]: E0216 15:10:38.118046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2j7l9" podUID="2498fe6c-9af0-4225-8450-558085a67825" Feb 16 15:10:38 crc kubenswrapper[4835]: E0216 15:10:38.118953 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kxxft" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.162435 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" podStartSLOduration=24.162417916 podStartE2EDuration="24.162417916s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:38.160558655 +0000 UTC m=+187.452551540" watchObservedRunningTime="2026-02-16 15:10:38.162417916 +0000 UTC m=+187.454410811" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.164851 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" podStartSLOduration=24.164841063 
podStartE2EDuration="24.164841063s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:38.145258833 +0000 UTC m=+187.437251728" watchObservedRunningTime="2026-02-16 15:10:38.164841063 +0000 UTC m=+187.456833958" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.251472 4835 patch_prober.go:28] interesting pod/route-controller-manager-5c554bd657-tr88v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:32774->10.217.0.54:8443: read: connection reset by peer" start-of-body= Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.251542 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" podUID="a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:32774->10.217.0.54:8443: read: connection reset by peer" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.418156 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.443365 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-config\") pod \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.443756 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkwp6\" (UniqueName: \"kubernetes.io/projected/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-kube-api-access-bkwp6\") pod \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.443776 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-serving-cert\") pod \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.443796 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-client-ca\") pod \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.443852 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-proxy-ca-bundles\") pod \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\" (UID: \"531b1860-3fe4-41f5-91c2-a30e7e71ff5e\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.444807 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-config" (OuterVolumeSpecName: "config") pod "531b1860-3fe4-41f5-91c2-a30e7e71ff5e" (UID: "531b1860-3fe4-41f5-91c2-a30e7e71ff5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.444928 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-client-ca" (OuterVolumeSpecName: "client-ca") pod "531b1860-3fe4-41f5-91c2-a30e7e71ff5e" (UID: "531b1860-3fe4-41f5-91c2-a30e7e71ff5e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.451464 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "531b1860-3fe4-41f5-91c2-a30e7e71ff5e" (UID: "531b1860-3fe4-41f5-91c2-a30e7e71ff5e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.454055 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "531b1860-3fe4-41f5-91c2-a30e7e71ff5e" (UID: "531b1860-3fe4-41f5-91c2-a30e7e71ff5e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.462561 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-kube-api-access-bkwp6" (OuterVolumeSpecName: "kube-api-access-bkwp6") pod "531b1860-3fe4-41f5-91c2-a30e7e71ff5e" (UID: "531b1860-3fe4-41f5-91c2-a30e7e71ff5e"). InnerVolumeSpecName "kube-api-access-bkwp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.468176 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59c6b7c975-4w4gz"] Feb 16 15:10:38 crc kubenswrapper[4835]: E0216 15:10:38.475515 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531b1860-3fe4-41f5-91c2-a30e7e71ff5e" containerName="controller-manager" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.475573 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="531b1860-3fe4-41f5-91c2-a30e7e71ff5e" containerName="controller-manager" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.475844 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="531b1860-3fe4-41f5-91c2-a30e7e71ff5e" containerName="controller-manager" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.476435 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.522712 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c6b7c975-4w4gz"] Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.537688 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5c554bd657-tr88v_a1ebf1d4-a23b-4556-8b83-c3b48ca3409b/route-controller-manager/0.log" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.537754 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544473 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-serving-cert\") pod \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544565 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-646ds\" (UniqueName: \"kubernetes.io/projected/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-kube-api-access-646ds\") pod \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544611 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-config\") pod \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544660 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-client-ca\") pod \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\" (UID: \"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b\") " Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544751 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-client-ca\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544784 
4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkx9\" (UniqueName: \"kubernetes.io/projected/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-kube-api-access-zbkx9\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544810 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-proxy-ca-bundles\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544854 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-serving-cert\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544872 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-config\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544918 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc 
kubenswrapper[4835]: I0216 15:10:38.544929 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544938 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkwp6\" (UniqueName: \"kubernetes.io/projected/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-kube-api-access-bkwp6\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544948 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.544956 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/531b1860-3fe4-41f5-91c2-a30e7e71ff5e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.548298 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-client-ca" (OuterVolumeSpecName: "client-ca") pod "a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" (UID: "a1ebf1d4-a23b-4556-8b83-c3b48ca3409b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.548426 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-config" (OuterVolumeSpecName: "config") pod "a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" (UID: "a1ebf1d4-a23b-4556-8b83-c3b48ca3409b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.569773 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-kube-api-access-646ds" (OuterVolumeSpecName: "kube-api-access-646ds") pod "a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" (UID: "a1ebf1d4-a23b-4556-8b83-c3b48ca3409b"). InnerVolumeSpecName "kube-api-access-646ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.578611 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ztcpg"] Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.582172 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" (UID: "a1ebf1d4-a23b-4556-8b83-c3b48ca3409b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648064 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkx9\" (UniqueName: \"kubernetes.io/projected/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-kube-api-access-zbkx9\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648121 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-proxy-ca-bundles\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-serving-cert\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648200 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-config\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648231 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-client-ca\") pod 
\"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648265 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648276 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648288 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.648306 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-646ds\" (UniqueName: \"kubernetes.io/projected/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b-kube-api-access-646ds\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.649182 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-proxy-ca-bundles\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.649504 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-config\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " 
pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.653163 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-client-ca\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.655564 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-serving-cert\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.665463 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkx9\" (UniqueName: \"kubernetes.io/projected/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-kube-api-access-zbkx9\") pod \"controller-manager-59c6b7c975-4w4gz\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.834420 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:38 crc kubenswrapper[4835]: I0216 15:10:38.992425 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c6b7c975-4w4gz"] Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.124139 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm95t" event={"ID":"e1fe6dd1-829b-4120-8585-040e9032f292","Type":"ContainerStarted","Data":"36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.125908 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5c554bd657-tr88v_a1ebf1d4-a23b-4556-8b83-c3b48ca3409b/route-controller-manager/0.log" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.125939 4835 generic.go:334] "Generic (PLEG): container finished" podID="a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" containerID="520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6" exitCode=255 Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.126014 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.127769 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" event={"ID":"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b","Type":"ContainerDied","Data":"520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.127817 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v" event={"ID":"a1ebf1d4-a23b-4556-8b83-c3b48ca3409b","Type":"ContainerDied","Data":"0e8eac0d7f55a32358a822346f8b6ff3bcb245f7120c2404d8965be48e31b190"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.127835 4835 scope.go:117] "RemoveContainer" containerID="520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.139304 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgqwv" event={"ID":"0132c288-a83e-4f3c-b620-4cac59f56df9","Type":"ContainerStarted","Data":"60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.142312 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" event={"ID":"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9","Type":"ContainerStarted","Data":"200bc50ae954c0975804274c7fa9c0fe563c8b7e090d56e807ea3ced0d3a9806"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.142351 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" event={"ID":"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9","Type":"ContainerStarted","Data":"8c1a93964635c0ca7d683cb9af0a0434f6041b8cd282db5e2f2b72598aaaa12f"} Feb 16 
15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.143073 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.144834 4835 patch_prober.go:28] interesting pod/controller-manager-59c6b7c975-4w4gz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.144870 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" podUID="1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.146078 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-b5nkt" event={"ID":"5121c96d-796f-46b5-8889-b7e74c329b2f","Type":"ContainerStarted","Data":"209dff9dae68a79c455b4aed34a9f9ce5df7f44f03430e3682063a0387dc9060"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.148070 4835 generic.go:334] "Generic (PLEG): container finished" podID="531b1860-3fe4-41f5-91c2-a30e7e71ff5e" containerID="e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea" exitCode=0 Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.148117 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" event={"ID":"531b1860-3fe4-41f5-91c2-a30e7e71ff5e","Type":"ContainerDied","Data":"e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.148131 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" event={"ID":"531b1860-3fe4-41f5-91c2-a30e7e71ff5e","Type":"ContainerDied","Data":"5ab3edc3567348a1ca9ee44d21cd58c87e47a69c26d25613fb9b515834df874a"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.148174 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7599458999-k8d5h" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.157328 4835 scope.go:117] "RemoveContainer" containerID="520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6" Feb 16 15:10:39 crc kubenswrapper[4835]: E0216 15:10:39.158215 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6\": container with ID starting with 520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6 not found: ID does not exist" containerID="520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.158360 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6"} err="failed to get container status \"520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6\": rpc error: code = NotFound desc = could not find container \"520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6\": container with ID starting with 520f06d709a0f4e3bf6c901e202af4b4ca9e0b3ae9a0727bc90cd90ac6fbd5c6 not found: ID does not exist" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.158397 4835 scope.go:117] "RemoveContainer" containerID="e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.160372 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092" containerID="d865fec9a65bd56d6fc60ee25e48afadb6693ca0a42f612da305610ce3ce5595" exitCode=0 Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.160432 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092","Type":"ContainerDied","Data":"d865fec9a65bd56d6fc60ee25e48afadb6693ca0a42f612da305610ce3ce5595"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.166177 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wm95t" podStartSLOduration=3.023700667 podStartE2EDuration="40.166155844s" podCreationTimestamp="2026-02-16 15:09:59 +0000 UTC" firstStartedPulling="2026-02-16 15:10:01.627269207 +0000 UTC m=+150.919262112" lastFinishedPulling="2026-02-16 15:10:38.769724404 +0000 UTC m=+188.061717289" observedRunningTime="2026-02-16 15:10:39.151773948 +0000 UTC m=+188.443766843" watchObservedRunningTime="2026-02-16 15:10:39.166155844 +0000 UTC m=+188.458148739" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.167696 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v"] Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.169746 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c554bd657-tr88v"] Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.171726 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rzr6" event={"ID":"b5901620-08e0-4ade-974d-e8c241526ff1","Type":"ContainerStarted","Data":"3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5"} Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.185246 4835 scope.go:117] "RemoveContainer" containerID="e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea" Feb 16 
15:10:39 crc kubenswrapper[4835]: E0216 15:10:39.185890 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea\": container with ID starting with e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea not found: ID does not exist" containerID="e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.185934 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea"} err="failed to get container status \"e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea\": rpc error: code = NotFound desc = could not find container \"e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea\": container with ID starting with e1602f9fee9e1ff8509b2a1bc88c27e9ac9636e3afe330c54290e6ced70fb4ea not found: ID does not exist" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.188922 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" podStartSLOduration=5.18890887 podStartE2EDuration="5.18890887s" podCreationTimestamp="2026-02-16 15:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:39.186298848 +0000 UTC m=+188.478291743" watchObservedRunningTime="2026-02-16 15:10:39.18890887 +0000 UTC m=+188.480901765" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.219847 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-b5nkt" podStartSLOduration=168.219829202 podStartE2EDuration="2m48.219829202s" podCreationTimestamp="2026-02-16 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:39.21938667 +0000 UTC m=+188.511379565" watchObservedRunningTime="2026-02-16 15:10:39.219829202 +0000 UTC m=+188.511822087" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.221893 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qgqwv" podStartSLOduration=2.302853957 podStartE2EDuration="37.221886609s" podCreationTimestamp="2026-02-16 15:10:02 +0000 UTC" firstStartedPulling="2026-02-16 15:10:03.653840509 +0000 UTC m=+152.945833404" lastFinishedPulling="2026-02-16 15:10:38.572873171 +0000 UTC m=+187.864866056" observedRunningTime="2026-02-16 15:10:39.208036937 +0000 UTC m=+188.500029842" watchObservedRunningTime="2026-02-16 15:10:39.221886609 +0000 UTC m=+188.513879504" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.262427 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6rzr6" podStartSLOduration=3.308120647 podStartE2EDuration="37.262399705s" podCreationTimestamp="2026-02-16 15:10:02 +0000 UTC" firstStartedPulling="2026-02-16 15:10:04.708457399 +0000 UTC m=+154.000450294" lastFinishedPulling="2026-02-16 15:10:38.662736457 +0000 UTC m=+187.954729352" observedRunningTime="2026-02-16 15:10:39.25387586 +0000 UTC m=+188.545868755" watchObservedRunningTime="2026-02-16 15:10:39.262399705 +0000 UTC m=+188.554392610" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.265026 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7599458999-k8d5h"] Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.267796 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7599458999-k8d5h"] Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.384439 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="531b1860-3fe4-41f5-91c2-a30e7e71ff5e" path="/var/lib/kubelet/pods/531b1860-3fe4-41f5-91c2-a30e7e71ff5e/volumes" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.384954 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" path="/var/lib/kubelet/pods/a1ebf1d4-a23b-4556-8b83-c3b48ca3409b/volumes" Feb 16 15:10:39 crc kubenswrapper[4835]: I0216 15:10:39.618902 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.183134 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.191182 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.191319 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.318414 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 15:10:40 crc kubenswrapper[4835]: E0216 15:10:40.318619 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" containerName="route-controller-manager" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.318630 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" containerName="route-controller-manager" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.318725 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ebf1d4-a23b-4556-8b83-c3b48ca3409b" containerName="route-controller-manager" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 
15:10:40.319034 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.329480 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.368918 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.369028 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.439630 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.470265 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.470371 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.470442 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.496007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.571825 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kubelet-dir\") pod \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\" (UID: \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\") " Feb 16 15:10:40 crc 
kubenswrapper[4835]: I0216 15:10:40.571956 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kube-api-access\") pod \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\" (UID: \"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092\") " Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.571966 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092" (UID: "e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.572193 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.574695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092" (UID: "e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.643335 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.673400 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.773407 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r"] Feb 16 15:10:40 crc kubenswrapper[4835]: E0216 15:10:40.773693 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092" containerName="pruner" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.773708 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092" containerName="pruner" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.773829 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092" containerName="pruner" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.774263 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.780565 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.780706 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.780819 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.780844 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.780987 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.781137 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.794946 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r"] Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.876845 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rpcb\" (UniqueName: \"kubernetes.io/projected/14c6a800-2915-4a09-81e2-191f3ee9551e-kube-api-access-7rpcb\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.876994 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6a800-2915-4a09-81e2-191f3ee9551e-serving-cert\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.877161 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-config\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.877322 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-client-ca\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.978120 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rpcb\" (UniqueName: \"kubernetes.io/projected/14c6a800-2915-4a09-81e2-191f3ee9551e-kube-api-access-7rpcb\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.978197 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6a800-2915-4a09-81e2-191f3ee9551e-serving-cert\") pod 
\"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.978239 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-config\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.978268 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-client-ca\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.979325 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-client-ca\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.979617 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-config\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.983592 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6a800-2915-4a09-81e2-191f3ee9551e-serving-cert\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:40 crc kubenswrapper[4835]: I0216 15:10:40.993345 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rpcb\" (UniqueName: \"kubernetes.io/projected/14c6a800-2915-4a09-81e2-191f3ee9551e-kube-api-access-7rpcb\") pod \"route-controller-manager-69fcd7bf85-tlx9r\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:41 crc kubenswrapper[4835]: I0216 15:10:41.059064 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 15:10:41 crc kubenswrapper[4835]: W0216 15:10:41.071263 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb4cf4baa_96a9_4e95_85a3_67418ff831ca.slice/crio-b77a4b5608b54e3bd44eef3467d8edb8e38d5bd86095f4d0369cf5e860473f0b WatchSource:0}: Error finding container b77a4b5608b54e3bd44eef3467d8edb8e38d5bd86095f4d0369cf5e860473f0b: Status 404 returned error can't find the container with id b77a4b5608b54e3bd44eef3467d8edb8e38d5bd86095f4d0369cf5e860473f0b Feb 16 15:10:41 crc kubenswrapper[4835]: I0216 15:10:41.097005 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:41 crc kubenswrapper[4835]: I0216 15:10:41.198465 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 15:10:41 crc kubenswrapper[4835]: I0216 15:10:41.198489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e4f7eb2d-d2cb-4ef3-aef8-265b1e4bf092","Type":"ContainerDied","Data":"e008570f30860888e7b92a572fefd3d23acbe61e86d41fcb05085a2f4e6d6593"} Feb 16 15:10:41 crc kubenswrapper[4835]: I0216 15:10:41.198852 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e008570f30860888e7b92a572fefd3d23acbe61e86d41fcb05085a2f4e6d6593" Feb 16 15:10:41 crc kubenswrapper[4835]: I0216 15:10:41.199947 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b4cf4baa-96a9-4e95-85a3-67418ff831ca","Type":"ContainerStarted","Data":"b77a4b5608b54e3bd44eef3467d8edb8e38d5bd86095f4d0369cf5e860473f0b"} Feb 16 15:10:41 crc kubenswrapper[4835]: I0216 15:10:41.376430 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wm95t" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="registry-server" probeResult="failure" output=< Feb 16 15:10:41 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 16 15:10:41 crc kubenswrapper[4835]: > Feb 16 15:10:41 crc kubenswrapper[4835]: I0216 15:10:41.475725 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r"] Feb 16 15:10:41 crc kubenswrapper[4835]: W0216 15:10:41.480351 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c6a800_2915_4a09_81e2_191f3ee9551e.slice/crio-12b34c3aa7c2af446e3fc433ded968f70f4bc24d03caf31822a6e51ea75c938c WatchSource:0}: Error finding container 12b34c3aa7c2af446e3fc433ded968f70f4bc24d03caf31822a6e51ea75c938c: Status 404 returned 
error can't find the container with id 12b34c3aa7c2af446e3fc433ded968f70f4bc24d03caf31822a6e51ea75c938c Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.205694 4835 generic.go:334] "Generic (PLEG): container finished" podID="b4cf4baa-96a9-4e95-85a3-67418ff831ca" containerID="68b04ea5f10fec264e3e4dcb4082b89efd48065c3f42d433bff6cd997e819184" exitCode=0 Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.205795 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b4cf4baa-96a9-4e95-85a3-67418ff831ca","Type":"ContainerDied","Data":"68b04ea5f10fec264e3e4dcb4082b89efd48065c3f42d433bff6cd997e819184"} Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.207373 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" event={"ID":"14c6a800-2915-4a09-81e2-191f3ee9551e","Type":"ContainerStarted","Data":"f751cd85b1329dc29b080172b1063c0f54adae5d5036f13adf45893de84367ea"} Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.207400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" event={"ID":"14c6a800-2915-4a09-81e2-191f3ee9551e","Type":"ContainerStarted","Data":"12b34c3aa7c2af446e3fc433ded968f70f4bc24d03caf31822a6e51ea75c938c"} Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.207991 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.213519 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.242317 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" podStartSLOduration=8.242298647 podStartE2EDuration="8.242298647s" podCreationTimestamp="2026-02-16 15:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:42.2413069 +0000 UTC m=+191.533299815" watchObservedRunningTime="2026-02-16 15:10:42.242298647 +0000 UTC m=+191.534291542" Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.382980 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qgqwv" Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.383049 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qgqwv" Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.435649 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qgqwv" Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.805221 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6rzr6" Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.805701 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6rzr6" Feb 16 15:10:42 crc kubenswrapper[4835]: I0216 15:10:42.848986 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6rzr6" Feb 16 15:10:43 crc kubenswrapper[4835]: I0216 15:10:43.253550 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6rzr6" Feb 16 15:10:43 crc kubenswrapper[4835]: I0216 15:10:43.276231 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qgqwv" Feb 16 15:10:43 
crc kubenswrapper[4835]: I0216 15:10:43.470481 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:43 crc kubenswrapper[4835]: I0216 15:10:43.613944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kubelet-dir\") pod \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\" (UID: \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\") " Feb 16 15:10:43 crc kubenswrapper[4835]: I0216 15:10:43.614018 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kube-api-access\") pod \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\" (UID: \"b4cf4baa-96a9-4e95-85a3-67418ff831ca\") " Feb 16 15:10:43 crc kubenswrapper[4835]: I0216 15:10:43.614053 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b4cf4baa-96a9-4e95-85a3-67418ff831ca" (UID: "b4cf4baa-96a9-4e95-85a3-67418ff831ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:10:43 crc kubenswrapper[4835]: I0216 15:10:43.614252 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:43 crc kubenswrapper[4835]: I0216 15:10:43.621196 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b4cf4baa-96a9-4e95-85a3-67418ff831ca" (UID: "b4cf4baa-96a9-4e95-85a3-67418ff831ca"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:43 crc kubenswrapper[4835]: I0216 15:10:43.715924 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4cf4baa-96a9-4e95-85a3-67418ff831ca-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:44 crc kubenswrapper[4835]: I0216 15:10:44.217655 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98n7v" event={"ID":"61564e44-b4e6-4a57-9232-3403b0173aa6","Type":"ContainerStarted","Data":"e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a"} Feb 16 15:10:44 crc kubenswrapper[4835]: I0216 15:10:44.221193 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 15:10:44 crc kubenswrapper[4835]: I0216 15:10:44.221619 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b4cf4baa-96a9-4e95-85a3-67418ff831ca","Type":"ContainerDied","Data":"b77a4b5608b54e3bd44eef3467d8edb8e38d5bd86095f4d0369cf5e860473f0b"} Feb 16 15:10:44 crc kubenswrapper[4835]: I0216 15:10:44.221685 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77a4b5608b54e3bd44eef3467d8edb8e38d5bd86095f4d0369cf5e860473f0b" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.108610 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rzr6"] Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.115510 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 15:10:45 crc kubenswrapper[4835]: E0216 15:10:45.115736 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cf4baa-96a9-4e95-85a3-67418ff831ca" containerName="pruner" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.115748 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cf4baa-96a9-4e95-85a3-67418ff831ca" containerName="pruner" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.115852 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cf4baa-96a9-4e95-85a3-67418ff831ca" containerName="pruner" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.116228 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.120365 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.120451 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.123465 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.133068 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.133786 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-var-lock\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.133949 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4699a27a-8190-4caf-bf07-ff741058b280-kube-api-access\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.227038 4835 generic.go:334] "Generic (PLEG): container finished" podID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerID="e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a" exitCode=0 Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.227236 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6rzr6" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" containerName="registry-server" containerID="cri-o://3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5" gracePeriod=2 Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.227503 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98n7v" event={"ID":"61564e44-b4e6-4a57-9232-3403b0173aa6","Type":"ContainerDied","Data":"e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a"} Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.237870 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-var-lock\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.238190 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4699a27a-8190-4caf-bf07-ff741058b280-kube-api-access\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.239555 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.238080 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-var-lock\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.240908 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.261091 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4699a27a-8190-4caf-bf07-ff741058b280-kube-api-access\") pod \"installer-9-crc\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.446294 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.675142 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rzr6" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.747141 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqk5s\" (UniqueName: \"kubernetes.io/projected/b5901620-08e0-4ade-974d-e8c241526ff1-kube-api-access-qqk5s\") pod \"b5901620-08e0-4ade-974d-e8c241526ff1\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.747234 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-utilities\") pod \"b5901620-08e0-4ade-974d-e8c241526ff1\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.747278 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-catalog-content\") pod \"b5901620-08e0-4ade-974d-e8c241526ff1\" (UID: \"b5901620-08e0-4ade-974d-e8c241526ff1\") " Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.748032 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-utilities" (OuterVolumeSpecName: "utilities") pod "b5901620-08e0-4ade-974d-e8c241526ff1" (UID: "b5901620-08e0-4ade-974d-e8c241526ff1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.752808 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5901620-08e0-4ade-974d-e8c241526ff1-kube-api-access-qqk5s" (OuterVolumeSpecName: "kube-api-access-qqk5s") pod "b5901620-08e0-4ade-974d-e8c241526ff1" (UID: "b5901620-08e0-4ade-974d-e8c241526ff1"). InnerVolumeSpecName "kube-api-access-qqk5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.771736 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5901620-08e0-4ade-974d-e8c241526ff1" (UID: "b5901620-08e0-4ade-974d-e8c241526ff1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.846815 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.848990 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqk5s\" (UniqueName: \"kubernetes.io/projected/b5901620-08e0-4ade-974d-e8c241526ff1-kube-api-access-qqk5s\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.849024 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:45 crc kubenswrapper[4835]: I0216 15:10:45.849037 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5901620-08e0-4ade-974d-e8c241526ff1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:45 crc kubenswrapper[4835]: W0216 15:10:45.851481 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4699a27a_8190_4caf_bf07_ff741058b280.slice/crio-1ba6e7a633f96cff42b31bdd4ea6e2318844a8c91b14f2a48bd11d1f192a5e76 WatchSource:0}: Error finding container 1ba6e7a633f96cff42b31bdd4ea6e2318844a8c91b14f2a48bd11d1f192a5e76: Status 404 returned error can't find the container with id 1ba6e7a633f96cff42b31bdd4ea6e2318844a8c91b14f2a48bd11d1f192a5e76 Feb 16 15:10:46 
crc kubenswrapper[4835]: I0216 15:10:46.233934 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4699a27a-8190-4caf-bf07-ff741058b280","Type":"ContainerStarted","Data":"213d0a7e19a9f3f094fba0d38b5f5c584f19615510480c98e30dd6849d3de362"} Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.235623 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4699a27a-8190-4caf-bf07-ff741058b280","Type":"ContainerStarted","Data":"1ba6e7a633f96cff42b31bdd4ea6e2318844a8c91b14f2a48bd11d1f192a5e76"} Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.235906 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98n7v" event={"ID":"61564e44-b4e6-4a57-9232-3403b0173aa6","Type":"ContainerStarted","Data":"500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c"} Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.238659 4835 generic.go:334] "Generic (PLEG): container finished" podID="b5901620-08e0-4ade-974d-e8c241526ff1" containerID="3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5" exitCode=0 Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.238689 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rzr6" event={"ID":"b5901620-08e0-4ade-974d-e8c241526ff1","Type":"ContainerDied","Data":"3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5"} Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.238697 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rzr6" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.238704 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rzr6" event={"ID":"b5901620-08e0-4ade-974d-e8c241526ff1","Type":"ContainerDied","Data":"4adef6c0ca4911f0f94d1e5c1918d3cbbff6a1829ef9fc41b2dbbf068d32ba07"} Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.238721 4835 scope.go:117] "RemoveContainer" containerID="3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.259231 4835 scope.go:117] "RemoveContainer" containerID="8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.264992 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.2649795240000001 podStartE2EDuration="1.264979524s" podCreationTimestamp="2026-02-16 15:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:46.255948116 +0000 UTC m=+195.547941001" watchObservedRunningTime="2026-02-16 15:10:46.264979524 +0000 UTC m=+195.556972419" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.287389 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-98n7v" podStartSLOduration=2.159332763 podStartE2EDuration="46.287369551s" podCreationTimestamp="2026-02-16 15:10:00 +0000 UTC" firstStartedPulling="2026-02-16 15:10:01.604554391 +0000 UTC m=+150.896547286" lastFinishedPulling="2026-02-16 15:10:45.732591179 +0000 UTC m=+195.024584074" observedRunningTime="2026-02-16 15:10:46.280899063 +0000 UTC m=+195.572891958" watchObservedRunningTime="2026-02-16 15:10:46.287369551 +0000 UTC m=+195.579362446" Feb 16 15:10:46 crc 
kubenswrapper[4835]: I0216 15:10:46.303647 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rzr6"] Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.306804 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rzr6"] Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.313946 4835 scope.go:117] "RemoveContainer" containerID="2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.330987 4835 scope.go:117] "RemoveContainer" containerID="3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5" Feb 16 15:10:46 crc kubenswrapper[4835]: E0216 15:10:46.331710 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5\": container with ID starting with 3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5 not found: ID does not exist" containerID="3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.331767 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5"} err="failed to get container status \"3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5\": rpc error: code = NotFound desc = could not find container \"3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5\": container with ID starting with 3a843a475860b346804d6e9ea7b806cf56524096429d23d43fd04229a7ee2db5 not found: ID does not exist" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.331798 4835 scope.go:117] "RemoveContainer" containerID="8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8" Feb 16 15:10:46 crc kubenswrapper[4835]: E0216 15:10:46.332208 4835 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8\": container with ID starting with 8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8 not found: ID does not exist" containerID="8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.332304 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8"} err="failed to get container status \"8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8\": rpc error: code = NotFound desc = could not find container \"8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8\": container with ID starting with 8489373486a80067aaeaac7bf5fa987d68c2ac77c5b192c51b3a32dfb8da90d8 not found: ID does not exist" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.332377 4835 scope.go:117] "RemoveContainer" containerID="2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6" Feb 16 15:10:46 crc kubenswrapper[4835]: E0216 15:10:46.333149 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6\": container with ID starting with 2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6 not found: ID does not exist" containerID="2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6" Feb 16 15:10:46 crc kubenswrapper[4835]: I0216 15:10:46.333181 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6"} err="failed to get container status \"2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6\": rpc error: code = NotFound desc = could 
not find container \"2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6\": container with ID starting with 2a047c83a3f4926f2bd52fec600d23ccddf25d69bd9b4305724b4404f04ff6c6 not found: ID does not exist" Feb 16 15:10:47 crc kubenswrapper[4835]: I0216 15:10:47.245686 4835 generic.go:334] "Generic (PLEG): container finished" podID="76be94b7-5f32-478a-81a6-51758b5f7280" containerID="3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e" exitCode=0 Feb 16 15:10:47 crc kubenswrapper[4835]: I0216 15:10:47.245997 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnj4" event={"ID":"76be94b7-5f32-478a-81a6-51758b5f7280","Type":"ContainerDied","Data":"3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e"} Feb 16 15:10:47 crc kubenswrapper[4835]: I0216 15:10:47.386583 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" path="/var/lib/kubelet/pods/b5901620-08e0-4ade-974d-e8c241526ff1/volumes" Feb 16 15:10:48 crc kubenswrapper[4835]: I0216 15:10:48.257431 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnj4" event={"ID":"76be94b7-5f32-478a-81a6-51758b5f7280","Type":"ContainerStarted","Data":"8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5"} Feb 16 15:10:48 crc kubenswrapper[4835]: I0216 15:10:48.276060 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qlnj4" podStartSLOduration=2.225096229 podStartE2EDuration="48.276044526s" podCreationTimestamp="2026-02-16 15:10:00 +0000 UTC" firstStartedPulling="2026-02-16 15:10:01.608282394 +0000 UTC m=+150.900275299" lastFinishedPulling="2026-02-16 15:10:47.659230701 +0000 UTC m=+196.951223596" observedRunningTime="2026-02-16 15:10:48.274929505 +0000 UTC m=+197.566922470" watchObservedRunningTime="2026-02-16 15:10:48.276044526 +0000 UTC m=+197.568037421" 
Feb 16 15:10:48 crc kubenswrapper[4835]: I0216 15:10:48.586594 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:10:48 crc kubenswrapper[4835]: I0216 15:10:48.586675 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:10:50 crc kubenswrapper[4835]: I0216 15:10:50.246642 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:50 crc kubenswrapper[4835]: I0216 15:10:50.301677 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:10:50 crc kubenswrapper[4835]: I0216 15:10:50.431593 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:50 crc kubenswrapper[4835]: I0216 15:10:50.431637 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:50 crc kubenswrapper[4835]: I0216 15:10:50.465231 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:50 crc kubenswrapper[4835]: I0216 15:10:50.804569 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:50 crc kubenswrapper[4835]: I0216 15:10:50.804609 4835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:50 crc kubenswrapper[4835]: I0216 15:10:50.845588 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:10:51 crc kubenswrapper[4835]: I0216 15:10:51.310514 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:10:52 crc kubenswrapper[4835]: I0216 15:10:52.278505 4835 generic.go:334] "Generic (PLEG): container finished" podID="2498fe6c-9af0-4225-8450-558085a67825" containerID="b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0" exitCode=0 Feb 16 15:10:52 crc kubenswrapper[4835]: I0216 15:10:52.278892 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j7l9" event={"ID":"2498fe6c-9af0-4225-8450-558085a67825","Type":"ContainerDied","Data":"b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0"} Feb 16 15:10:54 crc kubenswrapper[4835]: I0216 15:10:54.182619 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c6b7c975-4w4gz"] Feb 16 15:10:54 crc kubenswrapper[4835]: I0216 15:10:54.183268 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" podUID="1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" containerName="controller-manager" containerID="cri-o://200bc50ae954c0975804274c7fa9c0fe563c8b7e090d56e807ea3ced0d3a9806" gracePeriod=30 Feb 16 15:10:54 crc kubenswrapper[4835]: I0216 15:10:54.190227 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r"] Feb 16 15:10:54 crc kubenswrapper[4835]: I0216 15:10:54.190728 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" podUID="14c6a800-2915-4a09-81e2-191f3ee9551e" containerName="route-controller-manager" containerID="cri-o://f751cd85b1329dc29b080172b1063c0f54adae5d5036f13adf45893de84367ea" gracePeriod=30 Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.295375 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxxft" event={"ID":"c7361241-f3c4-483a-9aa8-d1af72ab348b","Type":"ContainerStarted","Data":"ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f"} Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.297158 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j7l9" event={"ID":"2498fe6c-9af0-4225-8450-558085a67825","Type":"ContainerStarted","Data":"a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95"} Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.298772 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blp8" event={"ID":"822c5e9d-78fa-4c80-b3f4-e3a0310020a2","Type":"ContainerStarted","Data":"74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b"} Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.300112 4835 generic.go:334] "Generic (PLEG): container finished" podID="14c6a800-2915-4a09-81e2-191f3ee9551e" containerID="f751cd85b1329dc29b080172b1063c0f54adae5d5036f13adf45893de84367ea" exitCode=0 Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.300167 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" event={"ID":"14c6a800-2915-4a09-81e2-191f3ee9551e","Type":"ContainerDied","Data":"f751cd85b1329dc29b080172b1063c0f54adae5d5036f13adf45893de84367ea"} Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.301343 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" containerID="200bc50ae954c0975804274c7fa9c0fe563c8b7e090d56e807ea3ced0d3a9806" exitCode=0 Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.301373 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" event={"ID":"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9","Type":"ContainerDied","Data":"200bc50ae954c0975804274c7fa9c0fe563c8b7e090d56e807ea3ced0d3a9806"} Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.333166 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2j7l9" podStartSLOduration=3.108244427 podStartE2EDuration="52.333146779s" podCreationTimestamp="2026-02-16 15:10:03 +0000 UTC" firstStartedPulling="2026-02-16 15:10:05.712575589 +0000 UTC m=+155.004568484" lastFinishedPulling="2026-02-16 15:10:54.937477901 +0000 UTC m=+204.229470836" observedRunningTime="2026-02-16 15:10:55.33067494 +0000 UTC m=+204.622667835" watchObservedRunningTime="2026-02-16 15:10:55.333146779 +0000 UTC m=+204.625139674" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.396105 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.400938 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422343 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2"] Feb 16 15:10:55 crc kubenswrapper[4835]: E0216 15:10:55.422573 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" containerName="extract-utilities" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422585 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" containerName="extract-utilities" Feb 16 15:10:55 crc kubenswrapper[4835]: E0216 15:10:55.422597 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c6a800-2915-4a09-81e2-191f3ee9551e" containerName="route-controller-manager" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422603 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c6a800-2915-4a09-81e2-191f3ee9551e" containerName="route-controller-manager" Feb 16 15:10:55 crc kubenswrapper[4835]: E0216 15:10:55.422614 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" containerName="extract-content" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422620 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" containerName="extract-content" Feb 16 15:10:55 crc kubenswrapper[4835]: E0216 15:10:55.422633 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" containerName="controller-manager" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422638 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" containerName="controller-manager" Feb 16 15:10:55 crc kubenswrapper[4835]: E0216 15:10:55.422646 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" containerName="registry-server" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422651 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" containerName="registry-server" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422739 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5901620-08e0-4ade-974d-e8c241526ff1" containerName="registry-server" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422750 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c6a800-2915-4a09-81e2-191f3ee9551e" containerName="route-controller-manager" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.422758 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" containerName="controller-manager" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.423081 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.435139 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2"] Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477478 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-client-ca\") pod \"14c6a800-2915-4a09-81e2-191f3ee9551e\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477585 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-config\") pod \"14c6a800-2915-4a09-81e2-191f3ee9551e\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477606 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-config\") pod \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477626 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rpcb\" (UniqueName: \"kubernetes.io/projected/14c6a800-2915-4a09-81e2-191f3ee9551e-kube-api-access-7rpcb\") pod \"14c6a800-2915-4a09-81e2-191f3ee9551e\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477647 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbkx9\" (UniqueName: \"kubernetes.io/projected/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-kube-api-access-zbkx9\") pod 
\"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477667 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-serving-cert\") pod \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477683 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6a800-2915-4a09-81e2-191f3ee9551e-serving-cert\") pod \"14c6a800-2915-4a09-81e2-191f3ee9551e\" (UID: \"14c6a800-2915-4a09-81e2-191f3ee9551e\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477697 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-proxy-ca-bundles\") pod \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477715 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-client-ca\") pod \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\" (UID: \"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9\") " Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477802 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-client-ca\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477834 
4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-config\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477902 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxqc\" (UniqueName: \"kubernetes.io/projected/09072237-b063-4c9c-91f4-e7f771956438-kube-api-access-wvxqc\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.477931 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09072237-b063-4c9c-91f4-e7f771956438-serving-cert\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.478356 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-client-ca" (OuterVolumeSpecName: "client-ca") pod "14c6a800-2915-4a09-81e2-191f3ee9551e" (UID: "14c6a800-2915-4a09-81e2-191f3ee9551e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.478630 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-config" (OuterVolumeSpecName: "config") pod "14c6a800-2915-4a09-81e2-191f3ee9551e" (UID: "14c6a800-2915-4a09-81e2-191f3ee9551e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.478884 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-client-ca" (OuterVolumeSpecName: "client-ca") pod "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" (UID: "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.478995 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-config" (OuterVolumeSpecName: "config") pod "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" (UID: "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.483343 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c6a800-2915-4a09-81e2-191f3ee9551e-kube-api-access-7rpcb" (OuterVolumeSpecName: "kube-api-access-7rpcb") pod "14c6a800-2915-4a09-81e2-191f3ee9551e" (UID: "14c6a800-2915-4a09-81e2-191f3ee9551e"). InnerVolumeSpecName "kube-api-access-7rpcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.483694 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" (UID: "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.483937 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c6a800-2915-4a09-81e2-191f3ee9551e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14c6a800-2915-4a09-81e2-191f3ee9551e" (UID: "14c6a800-2915-4a09-81e2-191f3ee9551e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.484239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-kube-api-access-zbkx9" (OuterVolumeSpecName: "kube-api-access-zbkx9") pod "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" (UID: "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9"). InnerVolumeSpecName "kube-api-access-zbkx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.498326 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" (UID: "1bcd7c7d-c134-476a-b6e7-f5120dae4eb9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578718 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxqc\" (UniqueName: \"kubernetes.io/projected/09072237-b063-4c9c-91f4-e7f771956438-kube-api-access-wvxqc\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578771 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09072237-b063-4c9c-91f4-e7f771956438-serving-cert\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578796 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-client-ca\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578826 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-config\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578874 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578885 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6a800-2915-4a09-81e2-191f3ee9551e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578894 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578903 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578913 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578921 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c6a800-2915-4a09-81e2-191f3ee9551e-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578928 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.578937 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rpcb\" (UniqueName: \"kubernetes.io/projected/14c6a800-2915-4a09-81e2-191f3ee9551e-kube-api-access-7rpcb\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 
15:10:55.578945 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbkx9\" (UniqueName: \"kubernetes.io/projected/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9-kube-api-access-zbkx9\") on node \"crc\" DevicePath \"\"" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.579965 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-config\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.587003 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-client-ca\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.588839 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09072237-b063-4c9c-91f4-e7f771956438-serving-cert\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 15:10:55.595779 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxqc\" (UniqueName: \"kubernetes.io/projected/09072237-b063-4c9c-91f4-e7f771956438-kube-api-access-wvxqc\") pod \"route-controller-manager-7874658f57-kxdh2\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:55 crc kubenswrapper[4835]: I0216 
15:10:55.760296 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.203071 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2"] Feb 16 15:10:56 crc kubenswrapper[4835]: W0216 15:10:56.207222 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09072237_b063_4c9c_91f4_e7f771956438.slice/crio-93a46383c1d35f90b38e1de5c3fa0b2dbfe29626d70ef14cee9eb85ab46940db WatchSource:0}: Error finding container 93a46383c1d35f90b38e1de5c3fa0b2dbfe29626d70ef14cee9eb85ab46940db: Status 404 returned error can't find the container with id 93a46383c1d35f90b38e1de5c3fa0b2dbfe29626d70ef14cee9eb85ab46940db Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.330054 4835 generic.go:334] "Generic (PLEG): container finished" podID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerID="74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b" exitCode=0 Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.330181 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blp8" event={"ID":"822c5e9d-78fa-4c80-b3f4-e3a0310020a2","Type":"ContainerDied","Data":"74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b"} Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.340987 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" event={"ID":"14c6a800-2915-4a09-81e2-191f3ee9551e","Type":"ContainerDied","Data":"12b34c3aa7c2af446e3fc433ded968f70f4bc24d03caf31822a6e51ea75c938c"} Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.341429 4835 scope.go:117] "RemoveContainer" 
containerID="f751cd85b1329dc29b080172b1063c0f54adae5d5036f13adf45893de84367ea" Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.341757 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r" Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.349945 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" event={"ID":"1bcd7c7d-c134-476a-b6e7-f5120dae4eb9","Type":"ContainerDied","Data":"8c1a93964635c0ca7d683cb9af0a0434f6041b8cd282db5e2f2b72598aaaa12f"} Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.350040 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c6b7c975-4w4gz" Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.367397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" event={"ID":"09072237-b063-4c9c-91f4-e7f771956438","Type":"ContainerStarted","Data":"93a46383c1d35f90b38e1de5c3fa0b2dbfe29626d70ef14cee9eb85ab46940db"} Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.369653 4835 generic.go:334] "Generic (PLEG): container finished" podID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerID="ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f" exitCode=0 Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.369689 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxxft" event={"ID":"c7361241-f3c4-483a-9aa8-d1af72ab348b","Type":"ContainerDied","Data":"ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f"} Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.372652 4835 scope.go:117] "RemoveContainer" containerID="200bc50ae954c0975804274c7fa9c0fe563c8b7e090d56e807ea3ced0d3a9806" Feb 16 15:10:56 crc 
kubenswrapper[4835]: I0216 15:10:56.401669 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c6b7c975-4w4gz"] Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.406584 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59c6b7c975-4w4gz"] Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.418488 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" podStartSLOduration=2.418467048 podStartE2EDuration="2.418467048s" podCreationTimestamp="2026-02-16 15:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:56.413770877 +0000 UTC m=+205.705763812" watchObservedRunningTime="2026-02-16 15:10:56.418467048 +0000 UTC m=+205.710459943" Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.427599 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r"] Feb 16 15:10:56 crc kubenswrapper[4835]: I0216 15:10:56.432325 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69fcd7bf85-tlx9r"] Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.388452 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c6a800-2915-4a09-81e2-191f3ee9551e" path="/var/lib/kubelet/pods/14c6a800-2915-4a09-81e2-191f3ee9551e/volumes" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.390939 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcd7c7d-c134-476a-b6e7-f5120dae4eb9" path="/var/lib/kubelet/pods/1bcd7c7d-c134-476a-b6e7-f5120dae4eb9/volumes" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.391856 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" event={"ID":"09072237-b063-4c9c-91f4-e7f771956438","Type":"ContainerStarted","Data":"4b2d139459ed796b2c142dc7c0c4a2dd0806c3192e0c950df3fe16115612ed73"} Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.391952 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.392039 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.789054 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-548fffd6f9-pqwdr"] Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.789812 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.791750 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.792380 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.792506 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.792979 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.793149 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.799492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.812995 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-548fffd6f9-pqwdr"] Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.814200 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.814486 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b37040b-1fa7-4f83-9292-c49a3e4057cb-serving-cert\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.814585 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-config\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.814622 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-proxy-ca-bundles\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 
15:10:57.814844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swj2t\" (UniqueName: \"kubernetes.io/projected/0b37040b-1fa7-4f83-9292-c49a3e4057cb-kube-api-access-swj2t\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.814888 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-client-ca\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.915782 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b37040b-1fa7-4f83-9292-c49a3e4057cb-serving-cert\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.916187 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-config\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.916208 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-proxy-ca-bundles\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: 
\"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.916270 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swj2t\" (UniqueName: \"kubernetes.io/projected/0b37040b-1fa7-4f83-9292-c49a3e4057cb-kube-api-access-swj2t\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.916292 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-client-ca\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.917899 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-proxy-ca-bundles\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.917919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-client-ca\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.918575 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-config\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.923443 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b37040b-1fa7-4f83-9292-c49a3e4057cb-serving-cert\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:57 crc kubenswrapper[4835]: I0216 15:10:57.932777 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swj2t\" (UniqueName: \"kubernetes.io/projected/0b37040b-1fa7-4f83-9292-c49a3e4057cb-kube-api-access-swj2t\") pod \"controller-manager-548fffd6f9-pqwdr\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:58 crc kubenswrapper[4835]: I0216 15:10:58.114382 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:10:58 crc kubenswrapper[4835]: I0216 15:10:58.354964 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-548fffd6f9-pqwdr"] Feb 16 15:10:58 crc kubenswrapper[4835]: W0216 15:10:58.363476 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b37040b_1fa7_4f83_9292_c49a3e4057cb.slice/crio-7daf728f308fc83bbee8cb3aaf9b78642f4c78da57a91743fa42d56311153201 WatchSource:0}: Error finding container 7daf728f308fc83bbee8cb3aaf9b78642f4c78da57a91743fa42d56311153201: Status 404 returned error can't find the container with id 7daf728f308fc83bbee8cb3aaf9b78642f4c78da57a91743fa42d56311153201 Feb 16 15:10:58 crc kubenswrapper[4835]: I0216 15:10:58.382962 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" event={"ID":"0b37040b-1fa7-4f83-9292-c49a3e4057cb","Type":"ContainerStarted","Data":"7daf728f308fc83bbee8cb3aaf9b78642f4c78da57a91743fa42d56311153201"} Feb 16 15:10:58 crc kubenswrapper[4835]: I0216 15:10:58.384970 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxxft" event={"ID":"c7361241-f3c4-483a-9aa8-d1af72ab348b","Type":"ContainerStarted","Data":"1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e"} Feb 16 15:10:58 crc kubenswrapper[4835]: I0216 15:10:58.388629 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blp8" event={"ID":"822c5e9d-78fa-4c80-b3f4-e3a0310020a2","Type":"ContainerStarted","Data":"fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4"} Feb 16 15:10:58 crc kubenswrapper[4835]: I0216 15:10:58.403140 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-kxxft" podStartSLOduration=2.443180143 podStartE2EDuration="58.403127791s" podCreationTimestamp="2026-02-16 15:10:00 +0000 UTC" firstStartedPulling="2026-02-16 15:10:01.602819653 +0000 UTC m=+150.894812568" lastFinishedPulling="2026-02-16 15:10:57.562767311 +0000 UTC m=+206.854760216" observedRunningTime="2026-02-16 15:10:58.4013251 +0000 UTC m=+207.693317995" watchObservedRunningTime="2026-02-16 15:10:58.403127791 +0000 UTC m=+207.695120676" Feb 16 15:10:58 crc kubenswrapper[4835]: I0216 15:10:58.416971 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2blp8" podStartSLOduration=3.597543185 podStartE2EDuration="55.416961976s" podCreationTimestamp="2026-02-16 15:10:03 +0000 UTC" firstStartedPulling="2026-02-16 15:10:05.732905109 +0000 UTC m=+155.024897994" lastFinishedPulling="2026-02-16 15:10:57.55232388 +0000 UTC m=+206.844316785" observedRunningTime="2026-02-16 15:10:58.41530329 +0000 UTC m=+207.707296185" watchObservedRunningTime="2026-02-16 15:10:58.416961976 +0000 UTC m=+207.708954871" Feb 16 15:10:59 crc kubenswrapper[4835]: I0216 15:10:59.395454 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" event={"ID":"0b37040b-1fa7-4f83-9292-c49a3e4057cb","Type":"ContainerStarted","Data":"688d2800af039e4e16a28b809eb18ea07f2e9e65c5d8b018dcd5c21c115e6411"} Feb 16 15:11:00 crc kubenswrapper[4835]: I0216 15:11:00.401579 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:11:00 crc kubenswrapper[4835]: I0216 15:11:00.427254 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:11:00 crc kubenswrapper[4835]: I0216 15:11:00.445172 4835 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" podStartSLOduration=6.445149449 podStartE2EDuration="6.445149449s" podCreationTimestamp="2026-02-16 15:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:59.413617798 +0000 UTC m=+208.705610703" watchObservedRunningTime="2026-02-16 15:11:00.445149449 +0000 UTC m=+209.737142344" Feb 16 15:11:00 crc kubenswrapper[4835]: I0216 15:11:00.607198 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:11:00 crc kubenswrapper[4835]: I0216 15:11:00.607256 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:11:00 crc kubenswrapper[4835]: I0216 15:11:00.647538 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:11:00 crc kubenswrapper[4835]: I0216 15:11:00.846648 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:11:03 crc kubenswrapper[4835]: I0216 15:11:03.457317 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2blp8" Feb 16 15:11:03 crc kubenswrapper[4835]: I0216 15:11:03.457714 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2blp8" Feb 16 15:11:03 crc kubenswrapper[4835]: I0216 15:11:03.624118 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" podUID="6c28e183-5341-482b-9104-4ca0b17d4f3c" containerName="oauth-openshift" containerID="cri-o://3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492" gracePeriod=15 Feb 16 15:11:04 
crc kubenswrapper[4835]: I0216 15:11:04.064151 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.102605 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-trusted-ca-bundle\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.102781 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5vh\" (UniqueName: \"kubernetes.io/projected/6c28e183-5341-482b-9104-4ca0b17d4f3c-kube-api-access-qg5vh\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.102668 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2j7l9" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.102852 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2j7l9" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.102872 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.102939 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-dir\") pod 
\"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.102974 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-router-certs\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103045 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-serving-cert\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103117 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-session\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103189 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-service-ca\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103224 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-cliconfig\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") 
" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103302 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-error\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103365 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-ocp-branding-template\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103394 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-provider-selection\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103446 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-idp-0-file-data\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.103494 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-policies\") pod \"6c28e183-5341-482b-9104-4ca0b17d4f3c\" (UID: \"6c28e183-5341-482b-9104-4ca0b17d4f3c\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 
15:11:04.104027 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.104277 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.104568 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.104892 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.125801 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c28e183-5341-482b-9104-4ca0b17d4f3c-kube-api-access-qg5vh" (OuterVolumeSpecName: "kube-api-access-qg5vh") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "kube-api-access-qg5vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.126038 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.126275 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.126786 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.127103 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.129798 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.130736 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.131187 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.134230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.143980 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6c28e183-5341-482b-9104-4ca0b17d4f3c" (UID: "6c28e183-5341-482b-9104-4ca0b17d4f3c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.153626 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2j7l9" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.205417 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.205454 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.205510 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5vh\" (UniqueName: 
\"kubernetes.io/projected/6c28e183-5341-482b-9104-4ca0b17d4f3c-kube-api-access-qg5vh\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.205523 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.205870 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c28e183-5341-482b-9104-4ca0b17d4f3c-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.205972 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.205984 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.205994 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.206004 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.206014 4835 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.206022 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.206031 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.206041 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.206073 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c28e183-5341-482b-9104-4ca0b17d4f3c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.423945 4835 generic.go:334] "Generic (PLEG): container finished" podID="6c28e183-5341-482b-9104-4ca0b17d4f3c" containerID="3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492" exitCode=0 Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.424624 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" 
event={"ID":"6c28e183-5341-482b-9104-4ca0b17d4f3c","Type":"ContainerDied","Data":"3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492"} Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.424686 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" event={"ID":"6c28e183-5341-482b-9104-4ca0b17d4f3c","Type":"ContainerDied","Data":"d87d109699bfee101e2ad22ac261f8b21cfa82e32bc516d2b6d7c4a16c6353cf"} Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.424707 4835 scope.go:117] "RemoveContainer" containerID="3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.424657 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ztcpg" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.449259 4835 scope.go:117] "RemoveContainer" containerID="3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492" Feb 16 15:11:04 crc kubenswrapper[4835]: E0216 15:11:04.451770 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492\": container with ID starting with 3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492 not found: ID does not exist" containerID="3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.451815 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492"} err="failed to get container status \"3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492\": rpc error: code = NotFound desc = could not find container \"3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492\": container with ID 
starting with 3c4bfde6d5301666ec13f438f14b24539d421aeeb58e6714530bfcbc68e43492 not found: ID does not exist" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.453709 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ztcpg"] Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.458474 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ztcpg"] Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.465362 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2j7l9" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.509608 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlnj4"] Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.509872 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qlnj4" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" containerName="registry-server" containerID="cri-o://8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5" gracePeriod=2 Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.515485 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2blp8" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="registry-server" probeResult="failure" output=< Feb 16 15:11:04 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 16 15:11:04 crc kubenswrapper[4835]: > Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.901631 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.914762 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxvd\" (UniqueName: \"kubernetes.io/projected/76be94b7-5f32-478a-81a6-51758b5f7280-kube-api-access-flxvd\") pod \"76be94b7-5f32-478a-81a6-51758b5f7280\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.914851 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-catalog-content\") pod \"76be94b7-5f32-478a-81a6-51758b5f7280\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.914869 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-utilities\") pod \"76be94b7-5f32-478a-81a6-51758b5f7280\" (UID: \"76be94b7-5f32-478a-81a6-51758b5f7280\") " Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.915874 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-utilities" (OuterVolumeSpecName: "utilities") pod "76be94b7-5f32-478a-81a6-51758b5f7280" (UID: "76be94b7-5f32-478a-81a6-51758b5f7280"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.919969 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76be94b7-5f32-478a-81a6-51758b5f7280-kube-api-access-flxvd" (OuterVolumeSpecName: "kube-api-access-flxvd") pod "76be94b7-5f32-478a-81a6-51758b5f7280" (UID: "76be94b7-5f32-478a-81a6-51758b5f7280"). InnerVolumeSpecName "kube-api-access-flxvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:04 crc kubenswrapper[4835]: I0216 15:11:04.965912 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76be94b7-5f32-478a-81a6-51758b5f7280" (UID: "76be94b7-5f32-478a-81a6-51758b5f7280"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.016338 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flxvd\" (UniqueName: \"kubernetes.io/projected/76be94b7-5f32-478a-81a6-51758b5f7280-kube-api-access-flxvd\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.016370 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.016383 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76be94b7-5f32-478a-81a6-51758b5f7280-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.384007 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c28e183-5341-482b-9104-4ca0b17d4f3c" path="/var/lib/kubelet/pods/6c28e183-5341-482b-9104-4ca0b17d4f3c/volumes" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.432095 4835 generic.go:334] "Generic (PLEG): container finished" podID="76be94b7-5f32-478a-81a6-51758b5f7280" containerID="8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5" exitCode=0 Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.432161 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnj4" 
event={"ID":"76be94b7-5f32-478a-81a6-51758b5f7280","Type":"ContainerDied","Data":"8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5"} Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.432187 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlnj4" event={"ID":"76be94b7-5f32-478a-81a6-51758b5f7280","Type":"ContainerDied","Data":"f436e969a6412b1bccbd6793d00650897cb8ddf50030f7cfb21db7e8d2c1c728"} Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.432183 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlnj4" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.432205 4835 scope.go:117] "RemoveContainer" containerID="8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.447800 4835 scope.go:117] "RemoveContainer" containerID="3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.459212 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlnj4"] Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.463056 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qlnj4"] Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.471802 4835 scope.go:117] "RemoveContainer" containerID="bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.489656 4835 scope.go:117] "RemoveContainer" containerID="8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5" Feb 16 15:11:05 crc kubenswrapper[4835]: E0216 15:11:05.489989 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5\": container 
with ID starting with 8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5 not found: ID does not exist" containerID="8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.490051 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5"} err="failed to get container status \"8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5\": rpc error: code = NotFound desc = could not find container \"8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5\": container with ID starting with 8bbce4dbbfb16007441f36b548b83911325a4fc08309769f63d28a1955c6bfd5 not found: ID does not exist" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.490094 4835 scope.go:117] "RemoveContainer" containerID="3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e" Feb 16 15:11:05 crc kubenswrapper[4835]: E0216 15:11:05.490465 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e\": container with ID starting with 3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e not found: ID does not exist" containerID="3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.490487 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e"} err="failed to get container status \"3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e\": rpc error: code = NotFound desc = could not find container \"3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e\": container with ID starting with 3f7eaf8015c143b4922c419dc9e5fc87e6bcccd91af486b445210cb0be86390e not 
found: ID does not exist" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.490521 4835 scope.go:117] "RemoveContainer" containerID="bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941" Feb 16 15:11:05 crc kubenswrapper[4835]: E0216 15:11:05.490810 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941\": container with ID starting with bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941 not found: ID does not exist" containerID="bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.490849 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941"} err="failed to get container status \"bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941\": rpc error: code = NotFound desc = could not find container \"bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941\": container with ID starting with bf5a5b48fce6501f49e0e9d653286ed27683965c6d1aca9d669f3bc4a59cb941 not found: ID does not exist" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.791987 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-p9jlz"] Feb 16 15:11:05 crc kubenswrapper[4835]: E0216 15:11:05.792419 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" containerName="registry-server" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.792487 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" containerName="registry-server" Feb 16 15:11:05 crc kubenswrapper[4835]: E0216 15:11:05.792611 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="76be94b7-5f32-478a-81a6-51758b5f7280" containerName="extract-content" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.792709 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" containerName="extract-content" Feb 16 15:11:05 crc kubenswrapper[4835]: E0216 15:11:05.792789 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c28e183-5341-482b-9104-4ca0b17d4f3c" containerName="oauth-openshift" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.792862 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c28e183-5341-482b-9104-4ca0b17d4f3c" containerName="oauth-openshift" Feb 16 15:11:05 crc kubenswrapper[4835]: E0216 15:11:05.792948 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" containerName="extract-utilities" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.793007 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" containerName="extract-utilities" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.793175 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" containerName="registry-server" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.793251 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c28e183-5341-482b-9104-4ca0b17d4f3c" containerName="oauth-openshift" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.793719 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.800698 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.801367 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.801547 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.801999 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.802172 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.802724 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.803194 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.804889 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.805196 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.805501 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 15:11:05 crc 
kubenswrapper[4835]: I0216 15:11:05.805548 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.806066 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.807863 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-p9jlz"] Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.812448 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.814228 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.820087 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.825784 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.826063 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: 
\"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.826194 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.826320 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-service-ca\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.826432 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.826686 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-audit-policies\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 
15:11:05.826828 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.826949 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.827069 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5pr2\" (UniqueName: \"kubernetes.io/projected/7e597add-6065-48a9-85e1-06530d981505-kube-api-access-h5pr2\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.827180 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.827297 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.827411 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e597add-6065-48a9-85e1-06530d981505-audit-dir\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.827551 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.827689 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928470 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " 
pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5pr2\" (UniqueName: \"kubernetes.io/projected/7e597add-6065-48a9-85e1-06530d981505-kube-api-access-h5pr2\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928552 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e597add-6065-48a9-85e1-06530d981505-audit-dir\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928616 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928670 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928700 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928729 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928747 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " 
pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928765 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-service-ca\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928785 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928807 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-audit-policies\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928827 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.928997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7e597add-6065-48a9-85e1-06530d981505-audit-dir\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.929911 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.929921 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-service-ca\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.930032 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-audit-policies\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.930121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 
15:11:05.933338 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.933657 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.933872 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.934189 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.935324 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.935608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.935813 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.937493 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:05 crc kubenswrapper[4835]: I0216 15:11:05.952523 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5pr2\" (UniqueName: \"kubernetes.io/projected/7e597add-6065-48a9-85e1-06530d981505-kube-api-access-h5pr2\") pod \"oauth-openshift-65ff5df46b-p9jlz\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 
15:11:06 crc kubenswrapper[4835]: I0216 15:11:06.106005 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:06 crc kubenswrapper[4835]: I0216 15:11:06.483246 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-p9jlz"] Feb 16 15:11:06 crc kubenswrapper[4835]: W0216 15:11:06.494765 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e597add_6065_48a9_85e1_06530d981505.slice/crio-6995a85130d423d65ba65203ea719550244dc2bd1015e811886244e761c14305 WatchSource:0}: Error finding container 6995a85130d423d65ba65203ea719550244dc2bd1015e811886244e761c14305: Status 404 returned error can't find the container with id 6995a85130d423d65ba65203ea719550244dc2bd1015e811886244e761c14305 Feb 16 15:11:06 crc kubenswrapper[4835]: I0216 15:11:06.907485 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2j7l9"] Feb 16 15:11:06 crc kubenswrapper[4835]: I0216 15:11:06.907997 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2j7l9" podUID="2498fe6c-9af0-4225-8450-558085a67825" containerName="registry-server" containerID="cri-o://a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95" gracePeriod=2 Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.386200 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76be94b7-5f32-478a-81a6-51758b5f7280" path="/var/lib/kubelet/pods/76be94b7-5f32-478a-81a6-51758b5f7280/volumes" Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.448807 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" 
event={"ID":"7e597add-6065-48a9-85e1-06530d981505","Type":"ContainerStarted","Data":"194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04"} Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.448860 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" event={"ID":"7e597add-6065-48a9-85e1-06530d981505","Type":"ContainerStarted","Data":"6995a85130d423d65ba65203ea719550244dc2bd1015e811886244e761c14305"} Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.885765 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2j7l9" Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.957434 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-utilities\") pod \"2498fe6c-9af0-4225-8450-558085a67825\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.957514 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-catalog-content\") pod \"2498fe6c-9af0-4225-8450-558085a67825\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.957625 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr72v\" (UniqueName: \"kubernetes.io/projected/2498fe6c-9af0-4225-8450-558085a67825-kube-api-access-mr72v\") pod \"2498fe6c-9af0-4225-8450-558085a67825\" (UID: \"2498fe6c-9af0-4225-8450-558085a67825\") " Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.959514 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-utilities" (OuterVolumeSpecName: "utilities") 
pod "2498fe6c-9af0-4225-8450-558085a67825" (UID: "2498fe6c-9af0-4225-8450-558085a67825"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:11:07 crc kubenswrapper[4835]: I0216 15:11:07.964851 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2498fe6c-9af0-4225-8450-558085a67825-kube-api-access-mr72v" (OuterVolumeSpecName: "kube-api-access-mr72v") pod "2498fe6c-9af0-4225-8450-558085a67825" (UID: "2498fe6c-9af0-4225-8450-558085a67825"). InnerVolumeSpecName "kube-api-access-mr72v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.059374 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr72v\" (UniqueName: \"kubernetes.io/projected/2498fe6c-9af0-4225-8450-558085a67825-kube-api-access-mr72v\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.059406 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.092931 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2498fe6c-9af0-4225-8450-558085a67825" (UID: "2498fe6c-9af0-4225-8450-558085a67825"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.161434 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2498fe6c-9af0-4225-8450-558085a67825-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.460997 4835 generic.go:334] "Generic (PLEG): container finished" podID="2498fe6c-9af0-4225-8450-558085a67825" containerID="a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95" exitCode=0 Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.461324 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j7l9" event={"ID":"2498fe6c-9af0-4225-8450-558085a67825","Type":"ContainerDied","Data":"a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95"} Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.461431 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2j7l9" event={"ID":"2498fe6c-9af0-4225-8450-558085a67825","Type":"ContainerDied","Data":"68154f7b236b0125ba0172ca5e100da126a4b77e24a9d177f79b60e5488fe97e"} Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.461500 4835 scope.go:117] "RemoveContainer" containerID="a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.461352 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2j7l9" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.462393 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.469981 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.483915 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" podStartSLOduration=30.483901685 podStartE2EDuration="30.483901685s" podCreationTimestamp="2026-02-16 15:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:11:08.48229346 +0000 UTC m=+217.774286395" watchObservedRunningTime="2026-02-16 15:11:08.483901685 +0000 UTC m=+217.775894580" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.498450 4835 scope.go:117] "RemoveContainer" containerID="b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.540396 4835 scope.go:117] "RemoveContainer" containerID="aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.563783 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2j7l9"] Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.569025 4835 scope.go:117] "RemoveContainer" containerID="a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95" Feb 16 15:11:08 crc kubenswrapper[4835]: E0216 15:11:08.570141 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95\": container with ID starting with a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95 not found: ID does not exist" containerID="a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.570186 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95"} err="failed to get container status \"a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95\": rpc error: code = NotFound desc = could not find container \"a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95\": container with ID starting with a1624364cf99f895c702c61701dafda4a8834c269f4c46d01b4b8b68f7f72c95 not found: ID does not exist" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.570219 4835 scope.go:117] "RemoveContainer" containerID="b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0" Feb 16 15:11:08 crc kubenswrapper[4835]: E0216 15:11:08.570700 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0\": container with ID starting with b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0 not found: ID does not exist" containerID="b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.570754 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0"} err="failed to get container status \"b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0\": rpc error: code = NotFound desc = could not find container \"b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0\": container with ID 
starting with b8af7387b6f4e2441a6128fdb92e691a81a608733f98a83803e2b20bec2af7e0 not found: ID does not exist" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.570808 4835 scope.go:117] "RemoveContainer" containerID="aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df" Feb 16 15:11:08 crc kubenswrapper[4835]: E0216 15:11:08.571180 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df\": container with ID starting with aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df not found: ID does not exist" containerID="aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.571234 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df"} err="failed to get container status \"aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df\": rpc error: code = NotFound desc = could not find container \"aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df\": container with ID starting with aaa29dff2218383817ca3d3c7c3b38e4dc83a4f64e03b8a9da392a9f1acca9df not found: ID does not exist" Feb 16 15:11:08 crc kubenswrapper[4835]: I0216 15:11:08.572437 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2j7l9"] Feb 16 15:11:09 crc kubenswrapper[4835]: I0216 15:11:09.384972 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2498fe6c-9af0-4225-8450-558085a67825" path="/var/lib/kubelet/pods/2498fe6c-9af0-4225-8450-558085a67825/volumes" Feb 16 15:11:10 crc kubenswrapper[4835]: I0216 15:11:10.674406 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:11:10 crc kubenswrapper[4835]: 
I0216 15:11:10.912755 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxxft"] Feb 16 15:11:11 crc kubenswrapper[4835]: I0216 15:11:11.486115 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kxxft" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerName="registry-server" containerID="cri-o://1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e" gracePeriod=2 Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.013355 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.210941 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-catalog-content\") pod \"c7361241-f3c4-483a-9aa8-d1af72ab348b\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.211063 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-utilities\") pod \"c7361241-f3c4-483a-9aa8-d1af72ab348b\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.211129 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m9mp\" (UniqueName: \"kubernetes.io/projected/c7361241-f3c4-483a-9aa8-d1af72ab348b-kube-api-access-2m9mp\") pod \"c7361241-f3c4-483a-9aa8-d1af72ab348b\" (UID: \"c7361241-f3c4-483a-9aa8-d1af72ab348b\") " Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.212048 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-utilities" 
(OuterVolumeSpecName: "utilities") pod "c7361241-f3c4-483a-9aa8-d1af72ab348b" (UID: "c7361241-f3c4-483a-9aa8-d1af72ab348b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.216493 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7361241-f3c4-483a-9aa8-d1af72ab348b-kube-api-access-2m9mp" (OuterVolumeSpecName: "kube-api-access-2m9mp") pod "c7361241-f3c4-483a-9aa8-d1af72ab348b" (UID: "c7361241-f3c4-483a-9aa8-d1af72ab348b"). InnerVolumeSpecName "kube-api-access-2m9mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.258284 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7361241-f3c4-483a-9aa8-d1af72ab348b" (UID: "c7361241-f3c4-483a-9aa8-d1af72ab348b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.312811 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.313067 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7361241-f3c4-483a-9aa8-d1af72ab348b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.313149 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m9mp\" (UniqueName: \"kubernetes.io/projected/c7361241-f3c4-483a-9aa8-d1af72ab348b-kube-api-access-2m9mp\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.495047 4835 generic.go:334] "Generic (PLEG): container finished" podID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerID="1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e" exitCode=0 Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.495140 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxxft" event={"ID":"c7361241-f3c4-483a-9aa8-d1af72ab348b","Type":"ContainerDied","Data":"1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e"} Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.495456 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxxft" event={"ID":"c7361241-f3c4-483a-9aa8-d1af72ab348b","Type":"ContainerDied","Data":"aa79bdb7f9483bfe6a8d60dd0ac78bde56df4c0e8212bedc4eabab9ecc34dbb4"} Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.495484 4835 scope.go:117] "RemoveContainer" containerID="1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 
15:11:12.495181 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxxft" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.518055 4835 scope.go:117] "RemoveContainer" containerID="ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.526060 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kxxft"] Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.531886 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kxxft"] Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.547261 4835 scope.go:117] "RemoveContainer" containerID="a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.564475 4835 scope.go:117] "RemoveContainer" containerID="1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e" Feb 16 15:11:12 crc kubenswrapper[4835]: E0216 15:11:12.564868 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e\": container with ID starting with 1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e not found: ID does not exist" containerID="1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.564908 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e"} err="failed to get container status \"1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e\": rpc error: code = NotFound desc = could not find container \"1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e\": container with ID starting with 
1d5139af19372bf7d701aa7d1f681d04da0c77b23fa048a6acd35084b611912e not found: ID does not exist" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.564933 4835 scope.go:117] "RemoveContainer" containerID="ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f" Feb 16 15:11:12 crc kubenswrapper[4835]: E0216 15:11:12.565305 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f\": container with ID starting with ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f not found: ID does not exist" containerID="ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.565328 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f"} err="failed to get container status \"ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f\": rpc error: code = NotFound desc = could not find container \"ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f\": container with ID starting with ffd7c58595efc15696ba9718c64657bc120fa834f1839316561108f8c3b6a44f not found: ID does not exist" Feb 16 15:11:12 crc kubenswrapper[4835]: I0216 15:11:12.565341 4835 scope.go:117] "RemoveContainer" containerID="a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac" Feb 16 15:11:12 crc kubenswrapper[4835]: E0216 15:11:12.565665 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac\": container with ID starting with a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac not found: ID does not exist" containerID="a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac" Feb 16 15:11:12 crc 
kubenswrapper[4835]: I0216 15:11:12.565699 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac"} err="failed to get container status \"a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac\": rpc error: code = NotFound desc = could not find container \"a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac\": container with ID starting with a3d908d4c73f9c8295540a98f1dbc80c0d24afd9b67c9861f0614c045e24cdac not found: ID does not exist" Feb 16 15:11:13 crc kubenswrapper[4835]: I0216 15:11:13.385728 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" path="/var/lib/kubelet/pods/c7361241-f3c4-483a-9aa8-d1af72ab348b/volumes" Feb 16 15:11:13 crc kubenswrapper[4835]: I0216 15:11:13.494767 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2blp8" Feb 16 15:11:13 crc kubenswrapper[4835]: I0216 15:11:13.543622 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2blp8" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.181817 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-548fffd6f9-pqwdr"] Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.182009 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" podUID="0b37040b-1fa7-4f83-9292-c49a3e4057cb" containerName="controller-manager" containerID="cri-o://688d2800af039e4e16a28b809eb18ea07f2e9e65c5d8b018dcd5c21c115e6411" gracePeriod=30 Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.281598 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2"] Feb 16 15:11:14 
crc kubenswrapper[4835]: I0216 15:11:14.282147 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" podUID="09072237-b063-4c9c-91f4-e7f771956438" containerName="route-controller-manager" containerID="cri-o://4b2d139459ed796b2c142dc7c0c4a2dd0806c3192e0c950df3fe16115612ed73" gracePeriod=30 Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.512326 4835 generic.go:334] "Generic (PLEG): container finished" podID="0b37040b-1fa7-4f83-9292-c49a3e4057cb" containerID="688d2800af039e4e16a28b809eb18ea07f2e9e65c5d8b018dcd5c21c115e6411" exitCode=0 Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.512364 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" event={"ID":"0b37040b-1fa7-4f83-9292-c49a3e4057cb","Type":"ContainerDied","Data":"688d2800af039e4e16a28b809eb18ea07f2e9e65c5d8b018dcd5c21c115e6411"} Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.522367 4835 generic.go:334] "Generic (PLEG): container finished" podID="09072237-b063-4c9c-91f4-e7f771956438" containerID="4b2d139459ed796b2c142dc7c0c4a2dd0806c3192e0c950df3fe16115612ed73" exitCode=0 Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.522405 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" event={"ID":"09072237-b063-4c9c-91f4-e7f771956438","Type":"ContainerDied","Data":"4b2d139459ed796b2c142dc7c0c4a2dd0806c3192e0c950df3fe16115612ed73"} Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.669502 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.696445 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.747296 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09072237-b063-4c9c-91f4-e7f771956438-serving-cert\") pod \"09072237-b063-4c9c-91f4-e7f771956438\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.747369 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-config\") pod \"09072237-b063-4c9c-91f4-e7f771956438\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.747395 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvxqc\" (UniqueName: \"kubernetes.io/projected/09072237-b063-4c9c-91f4-e7f771956438-kube-api-access-wvxqc\") pod \"09072237-b063-4c9c-91f4-e7f771956438\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.747421 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-client-ca\") pod \"09072237-b063-4c9c-91f4-e7f771956438\" (UID: \"09072237-b063-4c9c-91f4-e7f771956438\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.748279 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-config" (OuterVolumeSpecName: "config") pod "09072237-b063-4c9c-91f4-e7f771956438" (UID: "09072237-b063-4c9c-91f4-e7f771956438"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.748364 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-client-ca" (OuterVolumeSpecName: "client-ca") pod "09072237-b063-4c9c-91f4-e7f771956438" (UID: "09072237-b063-4c9c-91f4-e7f771956438"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.757169 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09072237-b063-4c9c-91f4-e7f771956438-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09072237-b063-4c9c-91f4-e7f771956438" (UID: "09072237-b063-4c9c-91f4-e7f771956438"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.762851 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09072237-b063-4c9c-91f4-e7f771956438-kube-api-access-wvxqc" (OuterVolumeSpecName: "kube-api-access-wvxqc") pod "09072237-b063-4c9c-91f4-e7f771956438" (UID: "09072237-b063-4c9c-91f4-e7f771956438"). InnerVolumeSpecName "kube-api-access-wvxqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.848294 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b37040b-1fa7-4f83-9292-c49a3e4057cb-serving-cert\") pod \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.848369 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-config\") pod \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.848409 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-client-ca\") pod \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.848462 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swj2t\" (UniqueName: \"kubernetes.io/projected/0b37040b-1fa7-4f83-9292-c49a3e4057cb-kube-api-access-swj2t\") pod \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.848499 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-proxy-ca-bundles\") pod \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\" (UID: \"0b37040b-1fa7-4f83-9292-c49a3e4057cb\") " Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849337 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0b37040b-1fa7-4f83-9292-c49a3e4057cb" (UID: "0b37040b-1fa7-4f83-9292-c49a3e4057cb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849473 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b37040b-1fa7-4f83-9292-c49a3e4057cb" (UID: "0b37040b-1fa7-4f83-9292-c49a3e4057cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849521 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-config" (OuterVolumeSpecName: "config") pod "0b37040b-1fa7-4f83-9292-c49a3e4057cb" (UID: "0b37040b-1fa7-4f83-9292-c49a3e4057cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849785 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849806 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09072237-b063-4c9c-91f4-e7f771956438-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849817 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849827 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849836 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvxqc\" (UniqueName: \"kubernetes.io/projected/09072237-b063-4c9c-91f4-e7f771956438-kube-api-access-wvxqc\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849846 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09072237-b063-4c9c-91f4-e7f771956438-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.849854 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b37040b-1fa7-4f83-9292-c49a3e4057cb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.851424 4835 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b37040b-1fa7-4f83-9292-c49a3e4057cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b37040b-1fa7-4f83-9292-c49a3e4057cb" (UID: "0b37040b-1fa7-4f83-9292-c49a3e4057cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.851781 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b37040b-1fa7-4f83-9292-c49a3e4057cb-kube-api-access-swj2t" (OuterVolumeSpecName: "kube-api-access-swj2t") pod "0b37040b-1fa7-4f83-9292-c49a3e4057cb" (UID: "0b37040b-1fa7-4f83-9292-c49a3e4057cb"). InnerVolumeSpecName "kube-api-access-swj2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.950524 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swj2t\" (UniqueName: \"kubernetes.io/projected/0b37040b-1fa7-4f83-9292-c49a3e4057cb-kube-api-access-swj2t\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:14 crc kubenswrapper[4835]: I0216 15:11:14.950583 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b37040b-1fa7-4f83-9292-c49a3e4057cb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.530966 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.530958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-548fffd6f9-pqwdr" event={"ID":"0b37040b-1fa7-4f83-9292-c49a3e4057cb","Type":"ContainerDied","Data":"7daf728f308fc83bbee8cb3aaf9b78642f4c78da57a91743fa42d56311153201"} Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.531429 4835 scope.go:117] "RemoveContainer" containerID="688d2800af039e4e16a28b809eb18ea07f2e9e65c5d8b018dcd5c21c115e6411" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.533167 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.533167 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2" event={"ID":"09072237-b063-4c9c-91f4-e7f771956438","Type":"ContainerDied","Data":"93a46383c1d35f90b38e1de5c3fa0b2dbfe29626d70ef14cee9eb85ab46940db"} Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.570787 4835 scope.go:117] "RemoveContainer" containerID="4b2d139459ed796b2c142dc7c0c4a2dd0806c3192e0c950df3fe16115612ed73" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.580716 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-548fffd6f9-pqwdr"] Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.598291 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-548fffd6f9-pqwdr"] Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.598355 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2"] Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 
15:11:15.608730 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7874658f57-kxdh2"] Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801068 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7"] Feb 16 15:11:15 crc kubenswrapper[4835]: E0216 15:11:15.801393 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerName="extract-content" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801413 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerName="extract-content" Feb 16 15:11:15 crc kubenswrapper[4835]: E0216 15:11:15.801432 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerName="extract-utilities" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801444 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerName="extract-utilities" Feb 16 15:11:15 crc kubenswrapper[4835]: E0216 15:11:15.801459 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b37040b-1fa7-4f83-9292-c49a3e4057cb" containerName="controller-manager" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801473 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b37040b-1fa7-4f83-9292-c49a3e4057cb" containerName="controller-manager" Feb 16 15:11:15 crc kubenswrapper[4835]: E0216 15:11:15.801491 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2498fe6c-9af0-4225-8450-558085a67825" containerName="registry-server" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801503 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2498fe6c-9af0-4225-8450-558085a67825" containerName="registry-server" Feb 16 15:11:15 crc kubenswrapper[4835]: E0216 
15:11:15.801517 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerName="registry-server" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801568 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerName="registry-server" Feb 16 15:11:15 crc kubenswrapper[4835]: E0216 15:11:15.801592 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09072237-b063-4c9c-91f4-e7f771956438" containerName="route-controller-manager" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801604 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="09072237-b063-4c9c-91f4-e7f771956438" containerName="route-controller-manager" Feb 16 15:11:15 crc kubenswrapper[4835]: E0216 15:11:15.801619 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2498fe6c-9af0-4225-8450-558085a67825" containerName="extract-utilities" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801630 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2498fe6c-9af0-4225-8450-558085a67825" containerName="extract-utilities" Feb 16 15:11:15 crc kubenswrapper[4835]: E0216 15:11:15.801654 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2498fe6c-9af0-4225-8450-558085a67825" containerName="extract-content" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801666 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2498fe6c-9af0-4225-8450-558085a67825" containerName="extract-content" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801832 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2498fe6c-9af0-4225-8450-558085a67825" containerName="registry-server" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801850 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7361241-f3c4-483a-9aa8-d1af72ab348b" containerName="registry-server" Feb 16 15:11:15 crc 
kubenswrapper[4835]: I0216 15:11:15.801874 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="09072237-b063-4c9c-91f4-e7f771956438" containerName="route-controller-manager" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.801896 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b37040b-1fa7-4f83-9292-c49a3e4057cb" containerName="controller-manager" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.802446 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.804913 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d56d9c654-sbnkv"] Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.805486 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.807445 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.808188 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.808717 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.809364 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.809473 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 
15:11:15.809489 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.809780 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.809811 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.809936 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.810242 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.814579 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.818027 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.821719 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.821846 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7"] Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.829599 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d56d9c654-sbnkv"] Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.964456 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-proxy-ca-bundles\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.964520 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da689468-9abc-4190-872c-4837a32a3544-serving-cert\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.964681 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqghj\" (UniqueName: \"kubernetes.io/projected/0ebb46a3-2567-4218-a705-b1ac0a728329-kube-api-access-mqghj\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.964787 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da689468-9abc-4190-872c-4837a32a3544-client-ca\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.964836 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-client-ca\") pod 
\"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.964876 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da689468-9abc-4190-872c-4837a32a3544-config\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.964892 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-config\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.964968 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebb46a3-2567-4218-a705-b1ac0a728329-serving-cert\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:15 crc kubenswrapper[4835]: I0216 15:11:15.965006 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kffs5\" (UniqueName: \"kubernetes.io/projected/da689468-9abc-4190-872c-4837a32a3544-kube-api-access-kffs5\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066011 
4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-proxy-ca-bundles\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066045 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da689468-9abc-4190-872c-4837a32a3544-serving-cert\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066071 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqghj\" (UniqueName: \"kubernetes.io/projected/0ebb46a3-2567-4218-a705-b1ac0a728329-kube-api-access-mqghj\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066095 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da689468-9abc-4190-872c-4837a32a3544-client-ca\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066112 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-client-ca\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: 
\"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066130 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da689468-9abc-4190-872c-4837a32a3544-config\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066167 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-config\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066188 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebb46a3-2567-4218-a705-b1ac0a728329-serving-cert\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.066210 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kffs5\" (UniqueName: \"kubernetes.io/projected/da689468-9abc-4190-872c-4837a32a3544-kube-api-access-kffs5\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.068437 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-proxy-ca-bundles\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.068455 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-client-ca\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.070316 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da689468-9abc-4190-872c-4837a32a3544-client-ca\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.071561 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ebb46a3-2567-4218-a705-b1ac0a728329-config\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.071889 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da689468-9abc-4190-872c-4837a32a3544-serving-cert\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.074775 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da689468-9abc-4190-872c-4837a32a3544-config\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.075662 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ebb46a3-2567-4218-a705-b1ac0a728329-serving-cert\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.081606 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kffs5\" (UniqueName: \"kubernetes.io/projected/da689468-9abc-4190-872c-4837a32a3544-kube-api-access-kffs5\") pod \"route-controller-manager-65f5b556df-rdvh7\" (UID: \"da689468-9abc-4190-872c-4837a32a3544\") " pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.099162 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqghj\" (UniqueName: \"kubernetes.io/projected/0ebb46a3-2567-4218-a705-b1ac0a728329-kube-api-access-mqghj\") pod \"controller-manager-5d56d9c654-sbnkv\" (UID: \"0ebb46a3-2567-4218-a705-b1ac0a728329\") " pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.143740 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.145298 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.459734 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d56d9c654-sbnkv"] Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.494022 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7"] Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.552291 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" event={"ID":"da689468-9abc-4190-872c-4837a32a3544","Type":"ContainerStarted","Data":"bea50d16dfbbf7149e9f74bdc92bf12232eae50edf11aaaf71d8a8a3c479c3c3"} Feb 16 15:11:16 crc kubenswrapper[4835]: I0216 15:11:16.560298 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" event={"ID":"0ebb46a3-2567-4218-a705-b1ac0a728329","Type":"ContainerStarted","Data":"db002edce3f2fc2d8ea6ed7f28c1ed0e9283f049dbba1f3f5bb115302819d9e9"} Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.387998 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09072237-b063-4c9c-91f4-e7f771956438" path="/var/lib/kubelet/pods/09072237-b063-4c9c-91f4-e7f771956438/volumes" Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.388706 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b37040b-1fa7-4f83-9292-c49a3e4057cb" path="/var/lib/kubelet/pods/0b37040b-1fa7-4f83-9292-c49a3e4057cb/volumes" Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.567567 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" 
event={"ID":"0ebb46a3-2567-4218-a705-b1ac0a728329","Type":"ContainerStarted","Data":"45fc1c21f96a90f93ed60ce1f068675fa91e9da886aa7e330f4b1dae1fa1e4f8"} Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.567922 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.571427 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" event={"ID":"da689468-9abc-4190-872c-4837a32a3544","Type":"ContainerStarted","Data":"26c4dc98e3764fdbafea7bdf0a462d9dbaae68b60218ad9a93a8f86f73d92dc3"} Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.571739 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.573422 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.581156 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 15:11:17.619281 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d56d9c654-sbnkv" podStartSLOduration=3.619263174 podStartE2EDuration="3.619263174s" podCreationTimestamp="2026-02-16 15:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:11:17.597683823 +0000 UTC m=+226.889676728" watchObservedRunningTime="2026-02-16 15:11:17.619263174 +0000 UTC m=+226.911256069" Feb 16 15:11:17 crc kubenswrapper[4835]: I0216 
15:11:17.635290 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65f5b556df-rdvh7" podStartSLOduration=3.63527444 podStartE2EDuration="3.63527444s" podCreationTimestamp="2026-02-16 15:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:11:17.632645347 +0000 UTC m=+226.924638242" watchObservedRunningTime="2026-02-16 15:11:17.63527444 +0000 UTC m=+226.927267335" Feb 16 15:11:18 crc kubenswrapper[4835]: I0216 15:11:18.586521 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:11:18 crc kubenswrapper[4835]: I0216 15:11:18.586584 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:11:18 crc kubenswrapper[4835]: I0216 15:11:18.586619 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:11:18 crc kubenswrapper[4835]: I0216 15:11:18.586966 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:11:18 crc kubenswrapper[4835]: I0216 
15:11:18.587013 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b" gracePeriod=600 Feb 16 15:11:19 crc kubenswrapper[4835]: I0216 15:11:19.590683 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b" exitCode=0 Feb 16 15:11:19 crc kubenswrapper[4835]: I0216 15:11:19.590777 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b"} Feb 16 15:11:19 crc kubenswrapper[4835]: I0216 15:11:19.591168 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"c55d3f5b42809c4991ad19df6589021934ee6a2792c9d5ee4984082ec22f35aa"} Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.661509 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.662975 4835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663005 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.663138 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 
16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663158 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.663171 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663182 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.663193 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663201 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.663210 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663220 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.663232 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663234 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663241 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.663362 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663372 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.663386 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663396 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.663411 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663421 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663569 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663581 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 15:11:23 crc 
kubenswrapper[4835]: I0216 15:11:23.663591 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663601 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663611 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663628 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663933 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d" gracePeriod=15 Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.663985 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8" gracePeriod=15 Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.664008 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76" gracePeriod=15 Feb 16 15:11:23 crc 
kubenswrapper[4835]: I0216 15:11:23.663964 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19" gracePeriod=15 Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.664112 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.664142 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50" gracePeriod=15 Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.665581 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 16 15:11:23 crc kubenswrapper[4835]: E0216 15:11:23.690973 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.175:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.759347 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.759726 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.759746 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.759769 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.759786 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.759816 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.759871 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.760048 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.862438 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.862552 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863045 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863073 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863127 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863208 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863225 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863233 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863252 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863284 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863304 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863337 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863404 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.863450 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:23 crc kubenswrapper[4835]: I0216 15:11:23.991864 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:24 crc kubenswrapper[4835]: E0216 15:11:24.013262 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.175:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c2c02751679d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 15:11:24.012603293 +0000 UTC m=+233.304596198,LastTimestamp:2026-02-16 15:11:24.012603293 +0000 UTC m=+233.304596198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.618816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee"} Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.619219 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"19af24c445905cf208882bb1ee2300b357d36978ad46d0567b4d5c7ce8275e74"} Feb 16 15:11:24 crc 
kubenswrapper[4835]: E0216 15:11:24.620153 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.175:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.620796 4835 generic.go:334] "Generic (PLEG): container finished" podID="4699a27a-8190-4caf-bf07-ff741058b280" containerID="213d0a7e19a9f3f094fba0d38b5f5c584f19615510480c98e30dd6849d3de362" exitCode=0 Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.620853 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4699a27a-8190-4caf-bf07-ff741058b280","Type":"ContainerDied","Data":"213d0a7e19a9f3f094fba0d38b5f5c584f19615510480c98e30dd6849d3de362"} Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.621735 4835 status_manager.go:851] "Failed to get status for pod" podUID="4699a27a-8190-4caf-bf07-ff741058b280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.623316 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.624730 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.625387 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19" 
exitCode=0 Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.625409 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50" exitCode=0 Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.625417 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8" exitCode=0 Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.625424 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76" exitCode=2 Feb 16 15:11:24 crc kubenswrapper[4835]: I0216 15:11:24.625456 4835 scope.go:117] "RemoveContainer" containerID="21aa6191686780122a77d9a3f7ea0cf28bde0d16bdb45855c63cd15a62cbddf7" Feb 16 15:11:25 crc kubenswrapper[4835]: E0216 15:11:25.576881 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.175:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c2c02751679d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 15:11:24.012603293 +0000 UTC m=+233.304596198,LastTimestamp:2026-02-16 15:11:24.012603293 +0000 UTC 
m=+233.304596198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 15:11:25 crc kubenswrapper[4835]: I0216 15:11:25.632762 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.029436 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.030608 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.031104 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.031462 4835 status_manager.go:851] "Failed to get status for pod" podUID="4699a27a-8190-4caf-bf07-ff741058b280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.032317 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.032837 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.033080 4835 status_manager.go:851] "Failed to get status for pod" podUID="4699a27a-8190-4caf-bf07-ff741058b280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104279 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104365 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104408 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-kubelet-dir\") pod \"4699a27a-8190-4caf-bf07-ff741058b280\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104437 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104474 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104490 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-var-lock\") pod \"4699a27a-8190-4caf-bf07-ff741058b280\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104493 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4699a27a-8190-4caf-bf07-ff741058b280" (UID: "4699a27a-8190-4caf-bf07-ff741058b280"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104516 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104525 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4699a27a-8190-4caf-bf07-ff741058b280-kube-api-access\") pod \"4699a27a-8190-4caf-bf07-ff741058b280\" (UID: \"4699a27a-8190-4caf-bf07-ff741058b280\") " Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104549 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-var-lock" (OuterVolumeSpecName: "var-lock") pod "4699a27a-8190-4caf-bf07-ff741058b280" (UID: "4699a27a-8190-4caf-bf07-ff741058b280"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104595 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104807 4835 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104821 4835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104830 4835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104839 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4699a27a-8190-4caf-bf07-ff741058b280-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.104847 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.110508 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4699a27a-8190-4caf-bf07-ff741058b280-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4699a27a-8190-4caf-bf07-ff741058b280" (UID: "4699a27a-8190-4caf-bf07-ff741058b280"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.206033 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4699a27a-8190-4caf-bf07-ff741058b280-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.639477 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4699a27a-8190-4caf-bf07-ff741058b280","Type":"ContainerDied","Data":"1ba6e7a633f96cff42b31bdd4ea6e2318844a8c91b14f2a48bd11d1f192a5e76"} Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.639517 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ba6e7a633f96cff42b31bdd4ea6e2318844a8c91b14f2a48bd11d1f192a5e76" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.639615 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.644260 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.645055 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d" exitCode=0 Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.645105 4835 scope.go:117] "RemoveContainer" containerID="acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.645228 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.662516 4835 scope.go:117] "RemoveContainer" containerID="b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.670647 4835 status_manager.go:851] "Failed to get status for pod" podUID="4699a27a-8190-4caf-bf07-ff741058b280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.670926 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.672860 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.673111 4835 status_manager.go:851] "Failed to get status for pod" podUID="4699a27a-8190-4caf-bf07-ff741058b280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.679102 4835 scope.go:117] "RemoveContainer" containerID="0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8" Feb 16 15:11:26 crc 
kubenswrapper[4835]: I0216 15:11:26.692956 4835 scope.go:117] "RemoveContainer" containerID="4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.709195 4835 scope.go:117] "RemoveContainer" containerID="3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.735689 4835 scope.go:117] "RemoveContainer" containerID="a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.765582 4835 scope.go:117] "RemoveContainer" containerID="acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19" Feb 16 15:11:26 crc kubenswrapper[4835]: E0216 15:11:26.766007 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\": container with ID starting with acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19 not found: ID does not exist" containerID="acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.766047 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19"} err="failed to get container status \"acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\": rpc error: code = NotFound desc = could not find container \"acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19\": container with ID starting with acc5e6c0ad2e860400b446676e38e765276e0082219f591e6c50d82717efad19 not found: ID does not exist" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.766073 4835 scope.go:117] "RemoveContainer" containerID="b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50" Feb 16 15:11:26 crc kubenswrapper[4835]: E0216 15:11:26.766340 
4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\": container with ID starting with b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50 not found: ID does not exist" containerID="b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.766365 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50"} err="failed to get container status \"b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\": rpc error: code = NotFound desc = could not find container \"b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50\": container with ID starting with b195b369fff020cd7aaa892b265ef5a58cc9e6a43d5dec01f4b5bf771b1f1b50 not found: ID does not exist" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.766619 4835 scope.go:117] "RemoveContainer" containerID="0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8" Feb 16 15:11:26 crc kubenswrapper[4835]: E0216 15:11:26.766928 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\": container with ID starting with 0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8 not found: ID does not exist" containerID="0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.766954 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8"} err="failed to get container status \"0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\": rpc error: code = 
NotFound desc = could not find container \"0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8\": container with ID starting with 0606ec14a27485288d1d70340a53782eceaf9635f70695795c625938d36ea0c8 not found: ID does not exist" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.766971 4835 scope.go:117] "RemoveContainer" containerID="4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76" Feb 16 15:11:26 crc kubenswrapper[4835]: E0216 15:11:26.767395 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\": container with ID starting with 4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76 not found: ID does not exist" containerID="4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.767419 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76"} err="failed to get container status \"4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\": rpc error: code = NotFound desc = could not find container \"4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76\": container with ID starting with 4e46cb39f9f3052dc831f604333a656a0a71af347ed2aeda3328e5907b1b7f76 not found: ID does not exist" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.767437 4835 scope.go:117] "RemoveContainer" containerID="3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d" Feb 16 15:11:26 crc kubenswrapper[4835]: E0216 15:11:26.768198 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\": container with ID starting with 
3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d not found: ID does not exist" containerID="3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.768260 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d"} err="failed to get container status \"3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\": rpc error: code = NotFound desc = could not find container \"3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d\": container with ID starting with 3c260cc3c89ff525b126244e3b63dd96d38b791ef3fda39c9bf2aceef63cef9d not found: ID does not exist" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.768281 4835 scope.go:117] "RemoveContainer" containerID="a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540" Feb 16 15:11:26 crc kubenswrapper[4835]: E0216 15:11:26.768687 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\": container with ID starting with a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540 not found: ID does not exist" containerID="a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540" Feb 16 15:11:26 crc kubenswrapper[4835]: I0216 15:11:26.768709 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540"} err="failed to get container status \"a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\": rpc error: code = NotFound desc = could not find container \"a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540\": container with ID starting with a4c9acfce4ecdb6778cf71556ccd970bc424aed97987192206425cb079d41540 not found: ID does not 
exist" Feb 16 15:11:27 crc kubenswrapper[4835]: I0216 15:11:27.386442 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 15:11:31 crc kubenswrapper[4835]: I0216 15:11:31.380612 4835 status_manager.go:851] "Failed to get status for pod" podUID="4699a27a-8190-4caf-bf07-ff741058b280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:32 crc kubenswrapper[4835]: E0216 15:11:32.119062 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:32 crc kubenswrapper[4835]: E0216 15:11:32.119505 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:32 crc kubenswrapper[4835]: E0216 15:11:32.119991 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:32 crc kubenswrapper[4835]: E0216 15:11:32.120408 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:32 crc kubenswrapper[4835]: E0216 15:11:32.120904 4835 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:32 crc kubenswrapper[4835]: I0216 15:11:32.120950 4835 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 15:11:32 crc kubenswrapper[4835]: E0216 15:11:32.121364 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="200ms" Feb 16 15:11:32 crc kubenswrapper[4835]: E0216 15:11:32.322031 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="400ms" Feb 16 15:11:32 crc kubenswrapper[4835]: E0216 15:11:32.723399 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="800ms" Feb 16 15:11:33 crc kubenswrapper[4835]: E0216 15:11:33.524070 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection refused" interval="1.6s" Feb 16 15:11:35 crc kubenswrapper[4835]: E0216 15:11:35.126110 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.175:6443: connect: connection 
refused" interval="3.2s" Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.378744 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.379757 4835 status_manager.go:851] "Failed to get status for pod" podUID="4699a27a-8190-4caf-bf07-ff741058b280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.394189 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.394220 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:35 crc kubenswrapper[4835]: E0216 15:11:35.394796 4835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.395647 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:35 crc kubenswrapper[4835]: W0216 15:11:35.418178 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-4e8632769a72ad5e46e9e319f6847e8863ea81c3f6e69e0a7bcd51b4af505ec7 WatchSource:0}: Error finding container 4e8632769a72ad5e46e9e319f6847e8863ea81c3f6e69e0a7bcd51b4af505ec7: Status 404 returned error can't find the container with id 4e8632769a72ad5e46e9e319f6847e8863ea81c3f6e69e0a7bcd51b4af505ec7 Feb 16 15:11:35 crc kubenswrapper[4835]: E0216 15:11:35.578309 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.175:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c2c02751679d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 15:11:24.012603293 +0000 UTC m=+233.304596198,LastTimestamp:2026-02-16 15:11:24.012603293 +0000 UTC m=+233.304596198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.689065 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"545f8d6649ff41058bbefa39394104fffaa529b955f9650508b41ae2b419f11e"} Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.689119 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4e8632769a72ad5e46e9e319f6847e8863ea81c3f6e69e0a7bcd51b4af505ec7"} Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.689392 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.689423 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:35 crc kubenswrapper[4835]: E0216 15:11:35.689744 4835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:35 crc kubenswrapper[4835]: I0216 15:11:35.689815 4835 status_manager.go:851] "Failed to get status for pod" podUID="4699a27a-8190-4caf-bf07-ff741058b280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.175:6443: connect: connection refused" Feb 16 15:11:36 crc kubenswrapper[4835]: I0216 15:11:36.706322 4835 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="545f8d6649ff41058bbefa39394104fffaa529b955f9650508b41ae2b419f11e" exitCode=0 Feb 16 15:11:36 crc kubenswrapper[4835]: I0216 15:11:36.706375 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"545f8d6649ff41058bbefa39394104fffaa529b955f9650508b41ae2b419f11e"} Feb 16 15:11:36 crc kubenswrapper[4835]: I0216 15:11:36.706881 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a278e75baeeee8101ec3480cf1e94e7a0403393404ed4f2ee110224cc495931f"} Feb 16 15:11:36 crc kubenswrapper[4835]: I0216 15:11:36.706895 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"273bf505d20b593a6516ad567768356db02fe45f4653ede899967568677cc1c5"} Feb 16 15:11:36 crc kubenswrapper[4835]: I0216 15:11:36.706906 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"004547aab1cb12184838a0ba88b7aab55be6b65cf97757d9f3b2fa21171e62ba"} Feb 16 15:11:36 crc kubenswrapper[4835]: I0216 15:11:36.706916 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a8de8c66237e032c50aa094616f88e23a13767ad6f9ccd876a1d4296bf5163cd"} Feb 16 15:11:37 crc kubenswrapper[4835]: I0216 15:11:37.713739 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a9aa6d81c9e362d0615725cbd7a59caf6cdfcbaa83c9d1b92a0575fdeed20031"} Feb 16 15:11:37 crc kubenswrapper[4835]: I0216 15:11:37.713928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:37 crc kubenswrapper[4835]: 
I0216 15:11:37.714017 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:37 crc kubenswrapper[4835]: I0216 15:11:37.714045 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:38 crc kubenswrapper[4835]: I0216 15:11:38.720554 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 15:11:38 crc kubenswrapper[4835]: I0216 15:11:38.720599 4835 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029" exitCode=1 Feb 16 15:11:38 crc kubenswrapper[4835]: I0216 15:11:38.720624 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029"} Feb 16 15:11:38 crc kubenswrapper[4835]: I0216 15:11:38.721027 4835 scope.go:117] "RemoveContainer" containerID="76060edce4bce6a2647b6606a720f692b8e6181b08d6b5e8754e2482f7cc3029" Feb 16 15:11:39 crc kubenswrapper[4835]: I0216 15:11:39.728604 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 15:11:39 crc kubenswrapper[4835]: I0216 15:11:39.729120 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1aeccfb738dd2f06a5dd2233c6a506a99998780283cd0e40d9a51047ad65ed1c"} Feb 16 15:11:40 crc 
kubenswrapper[4835]: I0216 15:11:40.396855 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:40 crc kubenswrapper[4835]: I0216 15:11:40.396942 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:40 crc kubenswrapper[4835]: I0216 15:11:40.407669 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:42 crc kubenswrapper[4835]: I0216 15:11:42.504083 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:11:42 crc kubenswrapper[4835]: I0216 15:11:42.600878 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:11:42 crc kubenswrapper[4835]: I0216 15:11:42.605746 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:11:42 crc kubenswrapper[4835]: I0216 15:11:42.724123 4835 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:42 crc kubenswrapper[4835]: I0216 15:11:42.742826 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:42 crc kubenswrapper[4835]: I0216 15:11:42.742865 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:42 crc kubenswrapper[4835]: I0216 15:11:42.751721 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:42 crc kubenswrapper[4835]: I0216 15:11:42.754650 4835 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a4b5b913-073c-4b1c-b6fb-84700d04a242" Feb 16 15:11:43 crc kubenswrapper[4835]: I0216 15:11:43.747382 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:43 crc kubenswrapper[4835]: I0216 15:11:43.747427 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="feb29cf6-38e6-43a9-a310-c19f6315f407" Feb 16 15:11:43 crc kubenswrapper[4835]: I0216 15:11:43.751297 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a4b5b913-073c-4b1c-b6fb-84700d04a242" Feb 16 15:11:49 crc kubenswrapper[4835]: I0216 15:11:49.961824 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 15:11:51 crc kubenswrapper[4835]: I0216 15:11:51.376495 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 15:11:51 crc kubenswrapper[4835]: I0216 15:11:51.940252 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 15:11:52 crc kubenswrapper[4835]: I0216 15:11:52.511487 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 15:11:52 crc kubenswrapper[4835]: I0216 15:11:52.837370 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 15:11:53 crc kubenswrapper[4835]: I0216 15:11:53.325740 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-client" Feb 16 15:11:53 crc kubenswrapper[4835]: I0216 15:11:53.343951 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 15:11:53 crc kubenswrapper[4835]: I0216 15:11:53.501578 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 15:11:53 crc kubenswrapper[4835]: I0216 15:11:53.679366 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 15:11:54 crc kubenswrapper[4835]: I0216 15:11:54.598628 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 15:11:55 crc kubenswrapper[4835]: I0216 15:11:55.003605 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 15:11:55 crc kubenswrapper[4835]: I0216 15:11:55.143937 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 15:11:55 crc kubenswrapper[4835]: I0216 15:11:55.235075 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 15:11:55 crc kubenswrapper[4835]: I0216 15:11:55.312125 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 15:11:55 crc kubenswrapper[4835]: I0216 15:11:55.363521 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 15:11:55 crc kubenswrapper[4835]: I0216 15:11:55.771112 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 15:11:55 crc 
kubenswrapper[4835]: I0216 15:11:55.831375 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 15:11:55 crc kubenswrapper[4835]: I0216 15:11:55.991044 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.200681 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.274561 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.316209 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.356173 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.456898 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.593032 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.599556 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.822631 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.900587 4835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 
Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.905087 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.905143 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.915623 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 15:11:56 crc kubenswrapper[4835]: I0216 15:11:56.953426 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.953403295 podStartE2EDuration="14.953403295s" podCreationTimestamp="2026-02-16 15:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:11:56.951507372 +0000 UTC m=+266.243500267" watchObservedRunningTime="2026-02-16 15:11:56.953403295 +0000 UTC m=+266.245396190" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.111418 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.301706 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.470560 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.495812 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.510462 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.605732 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.667847 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.707858 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.831759 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.859543 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 15:11:57 crc kubenswrapper[4835]: I0216 15:11:57.937610 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.040929 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.101192 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.211623 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.244592 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.287565 4835 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.477071 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.495700 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.564835 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.644502 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.748920 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.754966 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 15:11:58 crc kubenswrapper[4835]: I0216 15:11:58.781682 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.073396 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.075334 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.280624 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.321181 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.418756 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.513826 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.519314 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.628175 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.692751 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.758752 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.760203 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.777688 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.811344 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.812115 4835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.857842 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.886381 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.908101 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.942741 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.979832 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 15:11:59 crc kubenswrapper[4835]: I0216 15:11:59.991938 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.134877 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.172626 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.192636 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.194050 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.225065 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.285881 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.405252 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.417249 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.437586 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.556106 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.597494 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.667004 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.851070 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.891770 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.905299 4835 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.943071 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.958751 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.992338 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 15:12:00 crc kubenswrapper[4835]: I0216 15:12:00.993941 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.164558 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.169040 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.200711 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.236772 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.244565 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.290520 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 15:12:01 crc 
kubenswrapper[4835]: I0216 15:12:01.384215 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.415484 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.462345 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.466323 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.474901 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.553881 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.802612 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.833966 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.856645 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 15:12:01 crc kubenswrapper[4835]: I0216 15:12:01.859984 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.065401 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.081475 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.196115 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.203688 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.204419 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.253595 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.290981 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.344587 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.344655 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.351158 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.364006 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 
15:12:02.573005 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.588129 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.646937 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.710816 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.723811 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.725396 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.727947 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.736868 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.737812 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.760349 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.793409 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 15:12:02 crc 
kubenswrapper[4835]: I0216 15:12:02.802994 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.826333 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.910229 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 15:12:02 crc kubenswrapper[4835]: I0216 15:12:02.990708 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.133337 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.138702 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.205359 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.229486 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.359207 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.646127 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.649161 4835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.655827 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.657807 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.661483 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.790075 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.839429 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.864641 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.909122 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.914439 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.940032 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.966652 4835 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 15:12:03 crc kubenswrapper[4835]: I0216 15:12:03.984030 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.039705 4835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.040174 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.080363 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.136099 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.203722 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.203994 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.205938 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.231889 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.257841 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.262252 
4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.339886 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.392419 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.491956 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.520495 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.579691 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.642154 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.741611 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.748240 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.794800 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 15:12:04 crc kubenswrapper[4835]: I0216 15:12:04.901219 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 15:12:04 crc kubenswrapper[4835]: 
I0216 15:12:04.909410 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.016340 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.032614 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.040728 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.088389 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.090101 4835 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.159706 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.172820 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.177598 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.178233 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.244347 4835 kubelet.go:2431] "SyncLoop REMOVE" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.245808 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee" gracePeriod=5 Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.272722 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.275335 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.288485 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.322090 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.382887 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.423037 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.482815 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.524350 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 15:12:05 crc 
kubenswrapper[4835]: I0216 15:12:05.526344 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.653270 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.708491 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.744662 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.814283 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.967730 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 15:12:05 crc kubenswrapper[4835]: I0216 15:12:05.974149 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.001051 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.052036 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.105761 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.111534 4835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.158473 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.213234 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.221847 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.271069 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.322322 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.368905 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.466506 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.482320 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.498883 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.550645 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 15:12:06 crc 
kubenswrapper[4835]: I0216 15:12:06.551341 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.722449 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.911508 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.950048 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 15:12:06 crc kubenswrapper[4835]: I0216 15:12:06.998121 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.031922 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.055278 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.148705 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.186484 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.193877 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.270176 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.440683 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.452834 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.572080 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.653185 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.759793 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 15:12:07 crc kubenswrapper[4835]: I0216 15:12:07.963422 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.035780 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.056458 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.113383 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.228588 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.245215 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.270208 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.396260 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.552855 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.697896 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.954412 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 15:12:08 crc kubenswrapper[4835]: I0216 15:12:08.986791 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 15:12:09 crc kubenswrapper[4835]: I0216 15:12:09.146254 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 15:12:09 crc kubenswrapper[4835]: I0216 15:12:09.303573 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 15:12:09 crc kubenswrapper[4835]: I0216 15:12:09.335417 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 15:12:09 crc kubenswrapper[4835]: I0216 15:12:09.511056 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 15:12:09 crc kubenswrapper[4835]: 
I0216 15:12:09.610497 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.009791 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.021014 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.077136 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.285260 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.467056 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.601108 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.807408 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.807471 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894187 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894274 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894329 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894357 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894399 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894731 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894726 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894780 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.894895 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.903955 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.904006 4835 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee" exitCode=137 Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.904053 4835 scope.go:117] "RemoveContainer" containerID="657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.904074 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.905989 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.936848 4835 scope.go:117] "RemoveContainer" containerID="657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee" Feb 16 15:12:10 crc kubenswrapper[4835]: E0216 15:12:10.937194 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee\": container with ID starting with 657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee not found: ID does not exist" containerID="657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.937235 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee"} err="failed to get container status \"657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee\": rpc error: code = NotFound desc = could not find container \"657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee\": container with ID starting with 657fa5ad1d41cfc02a74f3a3bdef1fb7235f39bc7ce1eba5e0460bdac99bb7ee not found: ID does not exist" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.995442 4835 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.995482 4835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.995492 4835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.995500 4835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:10 crc kubenswrapper[4835]: I0216 15:12:10.995510 4835 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:11 crc kubenswrapper[4835]: I0216 15:12:11.386381 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 15:12:11 crc kubenswrapper[4835]: I0216 15:12:11.464352 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 15:12:11 crc kubenswrapper[4835]: I0216 15:12:11.681855 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 15:12:11 crc kubenswrapper[4835]: I0216 15:12:11.704122 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 15:12:11 crc kubenswrapper[4835]: I0216 15:12:11.822853 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 15:12:12 crc kubenswrapper[4835]: I0216 15:12:12.438722 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 15:12:12 crc kubenswrapper[4835]: I0216 15:12:12.755066 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.140546 
4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wm95t"] Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.141368 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wm95t" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="registry-server" containerID="cri-o://36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63" gracePeriod=30 Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.145000 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98n7v"] Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.145256 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-98n7v" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerName="registry-server" containerID="cri-o://500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c" gracePeriod=30 Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.157884 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqskc"] Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.158108 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" podUID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" containerName="marketplace-operator" containerID="cri-o://f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b" gracePeriod=30 Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.169562 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgqwv"] Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.169874 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qgqwv" 
podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerName="registry-server" containerID="cri-o://60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574" gracePeriod=30 Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.179290 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2blp8"] Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.179590 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2blp8" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="registry-server" containerID="cri-o://fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4" gracePeriod=30 Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.202041 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l28fs"] Feb 16 15:12:19 crc kubenswrapper[4835]: E0216 15:12:19.202248 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.202259 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 15:12:19 crc kubenswrapper[4835]: E0216 15:12:19.202269 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4699a27a-8190-4caf-bf07-ff741058b280" containerName="installer" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.202279 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4699a27a-8190-4caf-bf07-ff741058b280" containerName="installer" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.202390 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.202408 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4699a27a-8190-4caf-bf07-ff741058b280" containerName="installer" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.202788 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.212173 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l28fs"] Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.299889 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.299945 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.300023 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4296\" (UniqueName: \"kubernetes.io/projected/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-kube-api-access-b4296\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.400691 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.400744 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.400807 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4296\" (UniqueName: \"kubernetes.io/projected/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-kube-api-access-b4296\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.402877 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.410728 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 
16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.419792 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4296\" (UniqueName: \"kubernetes.io/projected/bbc62417-6a2f-4620-acfa-7c2fac9e4c42-kube-api-access-b4296\") pod \"marketplace-operator-79b997595-l28fs\" (UID: \"bbc62417-6a2f-4620-acfa-7c2fac9e4c42\") " pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.591261 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.678483 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.706969 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-catalog-content\") pod \"61564e44-b4e6-4a57-9232-3403b0173aa6\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.707038 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-utilities\") pod \"61564e44-b4e6-4a57-9232-3403b0173aa6\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.707243 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtxr4\" (UniqueName: \"kubernetes.io/projected/61564e44-b4e6-4a57-9232-3403b0173aa6-kube-api-access-mtxr4\") pod \"61564e44-b4e6-4a57-9232-3403b0173aa6\" (UID: \"61564e44-b4e6-4a57-9232-3403b0173aa6\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.713263 4835 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-utilities" (OuterVolumeSpecName: "utilities") pod "61564e44-b4e6-4a57-9232-3403b0173aa6" (UID: "61564e44-b4e6-4a57-9232-3403b0173aa6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.717149 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61564e44-b4e6-4a57-9232-3403b0173aa6-kube-api-access-mtxr4" (OuterVolumeSpecName: "kube-api-access-mtxr4") pod "61564e44-b4e6-4a57-9232-3403b0173aa6" (UID: "61564e44-b4e6-4a57-9232-3403b0173aa6"). InnerVolumeSpecName "kube-api-access-mtxr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.724478 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2blp8" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.776382 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.783513 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.802952 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgqwv" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.805196 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61564e44-b4e6-4a57-9232-3403b0173aa6" (UID: "61564e44-b4e6-4a57-9232-3403b0173aa6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813341 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-catalog-content\") pod \"e1fe6dd1-829b-4120-8585-040e9032f292\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813376 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxn2f\" (UniqueName: \"kubernetes.io/projected/e1fe6dd1-829b-4120-8585-040e9032f292-kube-api-access-cxn2f\") pod \"e1fe6dd1-829b-4120-8585-040e9032f292\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813415 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-operator-metrics\") pod \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813452 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fq7c\" (UniqueName: \"kubernetes.io/projected/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-kube-api-access-9fq7c\") pod \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813475 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-utilities\") pod \"e1fe6dd1-829b-4120-8585-040e9032f292\" (UID: \"e1fe6dd1-829b-4120-8585-040e9032f292\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813530 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-catalog-content\") pod \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813578 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqhw7\" (UniqueName: \"kubernetes.io/projected/c21c247b-8282-4ea0-aaac-cd2908a9cfac-kube-api-access-lqhw7\") pod \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813621 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-trusted-ca\") pod \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\" (UID: \"c21c247b-8282-4ea0-aaac-cd2908a9cfac\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813645 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-utilities\") pod \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\" (UID: \"822c5e9d-78fa-4c80-b3f4-e3a0310020a2\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813937 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtxr4\" (UniqueName: \"kubernetes.io/projected/61564e44-b4e6-4a57-9232-3403b0173aa6-kube-api-access-mtxr4\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813951 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.813961 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/61564e44-b4e6-4a57-9232-3403b0173aa6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.814667 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-utilities" (OuterVolumeSpecName: "utilities") pod "822c5e9d-78fa-4c80-b3f4-e3a0310020a2" (UID: "822c5e9d-78fa-4c80-b3f4-e3a0310020a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.815665 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c21c247b-8282-4ea0-aaac-cd2908a9cfac" (UID: "c21c247b-8282-4ea0-aaac-cd2908a9cfac"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.815811 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-utilities" (OuterVolumeSpecName: "utilities") pod "e1fe6dd1-829b-4120-8585-040e9032f292" (UID: "e1fe6dd1-829b-4120-8585-040e9032f292"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.820308 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c21c247b-8282-4ea0-aaac-cd2908a9cfac" (UID: "c21c247b-8282-4ea0-aaac-cd2908a9cfac"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.831598 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-kube-api-access-9fq7c" (OuterVolumeSpecName: "kube-api-access-9fq7c") pod "822c5e9d-78fa-4c80-b3f4-e3a0310020a2" (UID: "822c5e9d-78fa-4c80-b3f4-e3a0310020a2"). InnerVolumeSpecName "kube-api-access-9fq7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.831731 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fe6dd1-829b-4120-8585-040e9032f292-kube-api-access-cxn2f" (OuterVolumeSpecName: "kube-api-access-cxn2f") pod "e1fe6dd1-829b-4120-8585-040e9032f292" (UID: "e1fe6dd1-829b-4120-8585-040e9032f292"). InnerVolumeSpecName "kube-api-access-cxn2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.835162 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21c247b-8282-4ea0-aaac-cd2908a9cfac-kube-api-access-lqhw7" (OuterVolumeSpecName: "kube-api-access-lqhw7") pod "c21c247b-8282-4ea0-aaac-cd2908a9cfac" (UID: "c21c247b-8282-4ea0-aaac-cd2908a9cfac"). InnerVolumeSpecName "kube-api-access-lqhw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.869782 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1fe6dd1-829b-4120-8585-040e9032f292" (UID: "e1fe6dd1-829b-4120-8585-040e9032f292"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916002 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-utilities\") pod \"0132c288-a83e-4f3c-b620-4cac59f56df9\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916087 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-catalog-content\") pod \"0132c288-a83e-4f3c-b620-4cac59f56df9\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916118 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rd4f\" (UniqueName: \"kubernetes.io/projected/0132c288-a83e-4f3c-b620-4cac59f56df9-kube-api-access-8rd4f\") pod \"0132c288-a83e-4f3c-b620-4cac59f56df9\" (UID: \"0132c288-a83e-4f3c-b620-4cac59f56df9\") " Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916298 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916310 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fq7c\" (UniqueName: \"kubernetes.io/projected/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-kube-api-access-9fq7c\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916319 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: 
I0216 15:12:19.916326 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqhw7\" (UniqueName: \"kubernetes.io/projected/c21c247b-8282-4ea0-aaac-cd2908a9cfac-kube-api-access-lqhw7\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916336 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c21c247b-8282-4ea0-aaac-cd2908a9cfac-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916343 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916351 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fe6dd1-829b-4120-8585-040e9032f292-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.916359 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxn2f\" (UniqueName: \"kubernetes.io/projected/e1fe6dd1-829b-4120-8585-040e9032f292-kube-api-access-cxn2f\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.918872 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-utilities" (OuterVolumeSpecName: "utilities") pod "0132c288-a83e-4f3c-b620-4cac59f56df9" (UID: "0132c288-a83e-4f3c-b620-4cac59f56df9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.919647 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0132c288-a83e-4f3c-b620-4cac59f56df9-kube-api-access-8rd4f" (OuterVolumeSpecName: "kube-api-access-8rd4f") pod "0132c288-a83e-4f3c-b620-4cac59f56df9" (UID: "0132c288-a83e-4f3c-b620-4cac59f56df9"). InnerVolumeSpecName "kube-api-access-8rd4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.939683 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "822c5e9d-78fa-4c80-b3f4-e3a0310020a2" (UID: "822c5e9d-78fa-4c80-b3f4-e3a0310020a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.988759 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0132c288-a83e-4f3c-b620-4cac59f56df9" (UID: "0132c288-a83e-4f3c-b620-4cac59f56df9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.989757 4835 generic.go:334] "Generic (PLEG): container finished" podID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerID="500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c" exitCode=0 Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.989843 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98n7v" event={"ID":"61564e44-b4e6-4a57-9232-3403b0173aa6","Type":"ContainerDied","Data":"500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c"} Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.989870 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98n7v" event={"ID":"61564e44-b4e6-4a57-9232-3403b0173aa6","Type":"ContainerDied","Data":"ed28996c378ff3d6bb968d3e58f0e8b7c57113b715a9274a102d0caae7714c8f"} Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.989887 4835 scope.go:117] "RemoveContainer" containerID="500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.990017 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98n7v" Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.996504 4835 generic.go:334] "Generic (PLEG): container finished" podID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerID="fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4" exitCode=0 Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.996577 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blp8" event={"ID":"822c5e9d-78fa-4c80-b3f4-e3a0310020a2","Type":"ContainerDied","Data":"fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4"} Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.996599 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2blp8" event={"ID":"822c5e9d-78fa-4c80-b3f4-e3a0310020a2","Type":"ContainerDied","Data":"8bb4d6bde9f857761f4f8f566d493fd7a9f74d849e17664fa141174f74673979"} Feb 16 15:12:19 crc kubenswrapper[4835]: I0216 15:12:19.996672 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2blp8" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.001845 4835 generic.go:334] "Generic (PLEG): container finished" podID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerID="60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574" exitCode=0 Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.001917 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgqwv" event={"ID":"0132c288-a83e-4f3c-b620-4cac59f56df9","Type":"ContainerDied","Data":"60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574"} Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.001946 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qgqwv" event={"ID":"0132c288-a83e-4f3c-b620-4cac59f56df9","Type":"ContainerDied","Data":"9efc25f9e97aeb12ef6403e8d5ac07be08da988e45b714bfd1e7d791a1297691"} Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.001971 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qgqwv" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.003305 4835 generic.go:334] "Generic (PLEG): container finished" podID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" containerID="f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b" exitCode=0 Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.003420 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" event={"ID":"c21c247b-8282-4ea0-aaac-cd2908a9cfac","Type":"ContainerDied","Data":"f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b"} Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.003489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" event={"ID":"c21c247b-8282-4ea0-aaac-cd2908a9cfac","Type":"ContainerDied","Data":"1b224f7ea2a14f6dc1de18b0ce8166a477d153de76bf31a8ba12e08204be3c1a"} Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.003628 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gqskc" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.005583 4835 generic.go:334] "Generic (PLEG): container finished" podID="e1fe6dd1-829b-4120-8585-040e9032f292" containerID="36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63" exitCode=0 Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.005623 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm95t" event={"ID":"e1fe6dd1-829b-4120-8585-040e9032f292","Type":"ContainerDied","Data":"36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63"} Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.005647 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wm95t" event={"ID":"e1fe6dd1-829b-4120-8585-040e9032f292","Type":"ContainerDied","Data":"f53b0a889a88061cd55eeb7fc92ef61e0fca6478f4565d2a15e7b56c90a79c7c"} Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.005675 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wm95t" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.012506 4835 scope.go:117] "RemoveContainer" containerID="e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.016999 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.017025 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0132c288-a83e-4f3c-b620-4cac59f56df9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.017037 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rd4f\" (UniqueName: \"kubernetes.io/projected/0132c288-a83e-4f3c-b620-4cac59f56df9-kube-api-access-8rd4f\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.017050 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822c5e9d-78fa-4c80-b3f4-e3a0310020a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.028875 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98n7v"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.031068 4835 scope.go:117] "RemoveContainer" containerID="711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.034354 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-98n7v"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.042350 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-2blp8"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.050792 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2blp8"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.056912 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgqwv"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.060856 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qgqwv"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.061764 4835 scope.go:117] "RemoveContainer" containerID="500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.062743 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c\": container with ID starting with 500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c not found: ID does not exist" containerID="500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.062943 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c"} err="failed to get container status \"500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c\": rpc error: code = NotFound desc = could not find container \"500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c\": container with ID starting with 500031298a724911eb905f65604150be3b60a7160f4b7fc31a41293537d6089c not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.062983 4835 scope.go:117] "RemoveContainer" containerID="e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a" Feb 16 15:12:20 
crc kubenswrapper[4835]: E0216 15:12:20.063382 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a\": container with ID starting with e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a not found: ID does not exist" containerID="e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.063411 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a"} err="failed to get container status \"e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a\": rpc error: code = NotFound desc = could not find container \"e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a\": container with ID starting with e2af1963e6f81888dba9b7cfb57c32d9f650a6cb5ea40e9b82a2ae4f6a59c61a not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.063437 4835 scope.go:117] "RemoveContainer" containerID="711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.066697 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wm95t"] Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.069746 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0\": container with ID starting with 711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0 not found: ID does not exist" containerID="711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.069816 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0"} err="failed to get container status \"711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0\": rpc error: code = NotFound desc = could not find container \"711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0\": container with ID starting with 711d745de04581f60124ca46b6433dc491770cb1bca48fa3f73dff3c436178e0 not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.069845 4835 scope.go:117] "RemoveContainer" containerID="fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.072760 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wm95t"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.077974 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqskc"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.083584 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gqskc"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.086256 4835 scope.go:117] "RemoveContainer" containerID="74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.089057 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-l28fs"] Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.101816 4835 scope.go:117] "RemoveContainer" containerID="a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.116067 4835 scope.go:117] "RemoveContainer" containerID="fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.116589 4835 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4\": container with ID starting with fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4 not found: ID does not exist" containerID="fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.116621 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4"} err="failed to get container status \"fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4\": rpc error: code = NotFound desc = could not find container \"fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4\": container with ID starting with fe6dcec79f6d412d8aab756a0db102bd4d13e9633133f4d50f974a9b89d5ddc4 not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.116642 4835 scope.go:117] "RemoveContainer" containerID="74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.117069 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b\": container with ID starting with 74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b not found: ID does not exist" containerID="74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.117091 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b"} err="failed to get container status \"74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b\": rpc error: code = NotFound desc = could not find 
container \"74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b\": container with ID starting with 74a68d64b67df0e1df61c394fc889da3990fe228bfd065c7be2152634bcdb11b not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.117104 4835 scope.go:117] "RemoveContainer" containerID="a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.117385 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c\": container with ID starting with a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c not found: ID does not exist" containerID="a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.117406 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c"} err="failed to get container status \"a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c\": rpc error: code = NotFound desc = could not find container \"a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c\": container with ID starting with a2a8dcad2688c5857ebe47e6d8f5eb8b05918cf7f32ae94feffb26bce150526c not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.117422 4835 scope.go:117] "RemoveContainer" containerID="60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.129324 4835 scope.go:117] "RemoveContainer" containerID="5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.150481 4835 scope.go:117] "RemoveContainer" containerID="389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433" Feb 16 15:12:20 
crc kubenswrapper[4835]: I0216 15:12:20.170728 4835 scope.go:117] "RemoveContainer" containerID="60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.171185 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574\": container with ID starting with 60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574 not found: ID does not exist" containerID="60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.171235 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574"} err="failed to get container status \"60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574\": rpc error: code = NotFound desc = could not find container \"60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574\": container with ID starting with 60430094d241d4b56c1c00284c0e1bc7dfa0f0c60df42446bc5049be1eb80574 not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.171269 4835 scope.go:117] "RemoveContainer" containerID="5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.171772 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f\": container with ID starting with 5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f not found: ID does not exist" containerID="5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.171845 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f"} err="failed to get container status \"5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f\": rpc error: code = NotFound desc = could not find container \"5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f\": container with ID starting with 5786357e55c9d23999cd87e8fdb48761c210b1de195282ab661bc1da7db1569f not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.171867 4835 scope.go:117] "RemoveContainer" containerID="389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.172304 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433\": container with ID starting with 389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433 not found: ID does not exist" containerID="389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.172350 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433"} err="failed to get container status \"389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433\": rpc error: code = NotFound desc = could not find container \"389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433\": container with ID starting with 389a2aafc088ff9397af6d681ef912e1d57ce364ce49cfc1be2520c7f0404433 not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.172367 4835 scope.go:117] "RemoveContainer" containerID="f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.186227 4835 scope.go:117] "RemoveContainer" 
containerID="f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.188835 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b\": container with ID starting with f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b not found: ID does not exist" containerID="f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.188870 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b"} err="failed to get container status \"f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b\": rpc error: code = NotFound desc = could not find container \"f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b\": container with ID starting with f46c23d6b5ff36a62ec6e9a31621759826f9dc9582f959c1f76c009fd68c405b not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.188902 4835 scope.go:117] "RemoveContainer" containerID="36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.204649 4835 scope.go:117] "RemoveContainer" containerID="db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.219780 4835 scope.go:117] "RemoveContainer" containerID="76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.236984 4835 scope.go:117] "RemoveContainer" containerID="36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.237745 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63\": container with ID starting with 36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63 not found: ID does not exist" containerID="36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.237789 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63"} err="failed to get container status \"36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63\": rpc error: code = NotFound desc = could not find container \"36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63\": container with ID starting with 36e05c6d84696ead0dc55c5d450f4a25aa42a417b444db2a4d13d37332423c63 not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.237818 4835 scope.go:117] "RemoveContainer" containerID="db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.238278 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92\": container with ID starting with db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92 not found: ID does not exist" containerID="db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.238313 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92"} err="failed to get container status \"db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92\": rpc error: code = NotFound desc = could not find container 
\"db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92\": container with ID starting with db0f7e65bc107fff87939f1caa32038ad18c78686fcc1e19a2f96f28e5ebde92 not found: ID does not exist" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.238341 4835 scope.go:117] "RemoveContainer" containerID="76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7" Feb 16 15:12:20 crc kubenswrapper[4835]: E0216 15:12:20.238857 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7\": container with ID starting with 76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7 not found: ID does not exist" containerID="76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7" Feb 16 15:12:20 crc kubenswrapper[4835]: I0216 15:12:20.238898 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7"} err="failed to get container status \"76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7\": rpc error: code = NotFound desc = could not find container \"76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7\": container with ID starting with 76f43528ce999acd704dc93a6da3489af0bfe0c6a0cf6dacbbe43e6b282895a7 not found: ID does not exist" Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.013307 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" event={"ID":"bbc62417-6a2f-4620-acfa-7c2fac9e4c42","Type":"ContainerStarted","Data":"02670506216ae93c354b7e7dab278bec7bb199047c84a9eeb650401bf63fc077"} Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.013359 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" 
event={"ID":"bbc62417-6a2f-4620-acfa-7c2fac9e4c42","Type":"ContainerStarted","Data":"b9800de8ff0c6303ed44e480396d98dc39ecefea5691818ff3e034392b5cc17e"} Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.013515 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.019651 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.026794 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-l28fs" podStartSLOduration=2.026776036 podStartE2EDuration="2.026776036s" podCreationTimestamp="2026-02-16 15:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:21.026686874 +0000 UTC m=+290.318679769" watchObservedRunningTime="2026-02-16 15:12:21.026776036 +0000 UTC m=+290.318768931" Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.386988 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" path="/var/lib/kubelet/pods/0132c288-a83e-4f3c-b620-4cac59f56df9/volumes" Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.388205 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" path="/var/lib/kubelet/pods/61564e44-b4e6-4a57-9232-3403b0173aa6/volumes" Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.389141 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" path="/var/lib/kubelet/pods/822c5e9d-78fa-4c80-b3f4-e3a0310020a2/volumes" Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.390594 4835 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" path="/var/lib/kubelet/pods/c21c247b-8282-4ea0-aaac-cd2908a9cfac/volumes" Feb 16 15:12:21 crc kubenswrapper[4835]: I0216 15:12:21.391166 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" path="/var/lib/kubelet/pods/e1fe6dd1-829b-4120-8585-040e9032f292/volumes" Feb 16 15:12:31 crc kubenswrapper[4835]: I0216 15:12:31.165689 4835 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 15:13:18 crc kubenswrapper[4835]: I0216 15:13:18.586307 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:13:18 crc kubenswrapper[4835]: I0216 15:13:18.586896 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.351555 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4qwnq"] Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352185 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="extract-utilities" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352196 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="extract-utilities" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352206 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerName="extract-utilities" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352212 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerName="extract-utilities" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352219 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352226 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352240 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="extract-content" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352246 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="extract-content" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352256 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="extract-content" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352262 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="extract-content" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352269 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352274 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352281 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" containerName="marketplace-operator" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352287 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" containerName="marketplace-operator" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352295 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="extract-utilities" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352300 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="extract-utilities" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352308 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352314 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352324 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerName="extract-utilities" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352331 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerName="extract-utilities" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352340 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerName="extract-content" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352346 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerName="extract-content" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352354 4835 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352359 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: E0216 15:13:34.352366 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerName="extract-content" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352371 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerName="extract-content" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352465 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0132c288-a83e-4f3c-b620-4cac59f56df9" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352474 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fe6dd1-829b-4120-8585-040e9032f292" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352487 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="822c5e9d-78fa-4c80-b3f4-e3a0310020a2" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352495 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21c247b-8282-4ea0-aaac-cd2908a9cfac" containerName="marketplace-operator" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.352504 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="61564e44-b4e6-4a57-9232-3403b0173aa6" containerName="registry-server" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.353134 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.357475 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.359084 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c174b939-7427-41a4-8178-89525fb0186d-catalog-content\") pod \"community-operators-4qwnq\" (UID: \"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.359212 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c174b939-7427-41a4-8178-89525fb0186d-utilities\") pod \"community-operators-4qwnq\" (UID: \"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.359274 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k424k\" (UniqueName: \"kubernetes.io/projected/c174b939-7427-41a4-8178-89525fb0186d-kube-api-access-k424k\") pod \"community-operators-4qwnq\" (UID: \"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.364714 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4qwnq"] Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.460872 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c174b939-7427-41a4-8178-89525fb0186d-utilities\") pod \"community-operators-4qwnq\" (UID: 
\"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.461382 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k424k\" (UniqueName: \"kubernetes.io/projected/c174b939-7427-41a4-8178-89525fb0186d-kube-api-access-k424k\") pod \"community-operators-4qwnq\" (UID: \"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.461962 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c174b939-7427-41a4-8178-89525fb0186d-utilities\") pod \"community-operators-4qwnq\" (UID: \"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.461947 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c174b939-7427-41a4-8178-89525fb0186d-catalog-content\") pod \"community-operators-4qwnq\" (UID: \"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.464663 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c174b939-7427-41a4-8178-89525fb0186d-catalog-content\") pod \"community-operators-4qwnq\" (UID: \"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.487452 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k424k\" (UniqueName: \"kubernetes.io/projected/c174b939-7427-41a4-8178-89525fb0186d-kube-api-access-k424k\") pod \"community-operators-4qwnq\" (UID: 
\"c174b939-7427-41a4-8178-89525fb0186d\") " pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.547500 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zx6gk"] Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.548683 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.551108 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.563145 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zx6gk"] Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.563768 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-catalog-content\") pod \"certified-operators-zx6gk\" (UID: \"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") " pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.563885 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-utilities\") pod \"certified-operators-zx6gk\" (UID: \"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") " pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.563938 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4fl\" (UniqueName: \"kubernetes.io/projected/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-kube-api-access-wk4fl\") pod \"certified-operators-zx6gk\" (UID: 
\"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") " pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.664740 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-utilities\") pod \"certified-operators-zx6gk\" (UID: \"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") " pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.664835 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk4fl\" (UniqueName: \"kubernetes.io/projected/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-kube-api-access-wk4fl\") pod \"certified-operators-zx6gk\" (UID: \"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") " pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.664895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-catalog-content\") pod \"certified-operators-zx6gk\" (UID: \"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") " pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.665277 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-utilities\") pod \"certified-operators-zx6gk\" (UID: \"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") " pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.665383 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-catalog-content\") pod \"certified-operators-zx6gk\" (UID: \"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") 
" pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.675438 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.687443 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk4fl\" (UniqueName: \"kubernetes.io/projected/0fa97f53-b6fe-497f-b2b4-fded6b7a9285-kube-api-access-wk4fl\") pod \"certified-operators-zx6gk\" (UID: \"0fa97f53-b6fe-497f-b2b4-fded6b7a9285\") " pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:34 crc kubenswrapper[4835]: I0216 15:13:34.862885 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:35 crc kubenswrapper[4835]: I0216 15:13:35.091625 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zx6gk"] Feb 16 15:13:35 crc kubenswrapper[4835]: I0216 15:13:35.094058 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4qwnq"] Feb 16 15:13:35 crc kubenswrapper[4835]: I0216 15:13:35.391657 4835 generic.go:334] "Generic (PLEG): container finished" podID="0fa97f53-b6fe-497f-b2b4-fded6b7a9285" containerID="b74e590cfe673bf8c5b9e3d05d8ff716da0920e2cb3187fd6c4ec429735c62ee" exitCode=0 Feb 16 15:13:35 crc kubenswrapper[4835]: I0216 15:13:35.391751 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx6gk" event={"ID":"0fa97f53-b6fe-497f-b2b4-fded6b7a9285","Type":"ContainerDied","Data":"b74e590cfe673bf8c5b9e3d05d8ff716da0920e2cb3187fd6c4ec429735c62ee"} Feb 16 15:13:35 crc kubenswrapper[4835]: I0216 15:13:35.391798 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx6gk" 
event={"ID":"0fa97f53-b6fe-497f-b2b4-fded6b7a9285","Type":"ContainerStarted","Data":"6b6788bf30272c8e972c5d609aebe64a80bba75a365ebf63e33ba076df185e5f"} Feb 16 15:13:35 crc kubenswrapper[4835]: I0216 15:13:35.393113 4835 generic.go:334] "Generic (PLEG): container finished" podID="c174b939-7427-41a4-8178-89525fb0186d" containerID="7ad903a35421658c48ec0709cd23b8043ef53c9b83cfff51ab3f441660e98832" exitCode=0 Feb 16 15:13:35 crc kubenswrapper[4835]: I0216 15:13:35.393153 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qwnq" event={"ID":"c174b939-7427-41a4-8178-89525fb0186d","Type":"ContainerDied","Data":"7ad903a35421658c48ec0709cd23b8043ef53c9b83cfff51ab3f441660e98832"} Feb 16 15:13:35 crc kubenswrapper[4835]: I0216 15:13:35.393181 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qwnq" event={"ID":"c174b939-7427-41a4-8178-89525fb0186d","Type":"ContainerStarted","Data":"89a375a002a76d7b4f99fa00efbdaf45d5935bee323bd38a2fc221600af5c5b9"} Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.400150 4835 generic.go:334] "Generic (PLEG): container finished" podID="0fa97f53-b6fe-497f-b2b4-fded6b7a9285" containerID="22b28d5a4b7e89591b2e5f9fc5bad220918a0dc7129ec650da88f93a61eb6a3a" exitCode=0 Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.400220 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx6gk" event={"ID":"0fa97f53-b6fe-497f-b2b4-fded6b7a9285","Type":"ContainerDied","Data":"22b28d5a4b7e89591b2e5f9fc5bad220918a0dc7129ec650da88f93a61eb6a3a"} Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.402165 4835 generic.go:334] "Generic (PLEG): container finished" podID="c174b939-7427-41a4-8178-89525fb0186d" containerID="010644d0cc0a5a434ae431a7c687e8e592ca9cc900f78f0eb03cc3e2041af8ca" exitCode=0 Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.402193 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-4qwnq" event={"ID":"c174b939-7427-41a4-8178-89525fb0186d","Type":"ContainerDied","Data":"010644d0cc0a5a434ae431a7c687e8e592ca9cc900f78f0eb03cc3e2041af8ca"} Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.755357 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-42xkj"] Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.756239 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.764208 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.772767 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-42xkj"] Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.894291 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71e123f-9ed2-44f0-806c-888cd24c0c54-catalog-content\") pod \"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.894348 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71e123f-9ed2-44f0-806c-888cd24c0c54-utilities\") pod \"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.894399 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s887c\" (UniqueName: 
\"kubernetes.io/projected/f71e123f-9ed2-44f0-806c-888cd24c0c54-kube-api-access-s887c\") pod \"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.941264 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4252j"] Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.942634 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.946025 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.952148 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4252j"] Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.995325 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71e123f-9ed2-44f0-806c-888cd24c0c54-utilities\") pod \"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.995542 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s887c\" (UniqueName: \"kubernetes.io/projected/f71e123f-9ed2-44f0-806c-888cd24c0c54-kube-api-access-s887c\") pod \"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.995621 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71e123f-9ed2-44f0-806c-888cd24c0c54-catalog-content\") pod 
\"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.996049 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f71e123f-9ed2-44f0-806c-888cd24c0c54-utilities\") pod \"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:36 crc kubenswrapper[4835]: I0216 15:13:36.996067 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f71e123f-9ed2-44f0-806c-888cd24c0c54-catalog-content\") pod \"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.015984 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s887c\" (UniqueName: \"kubernetes.io/projected/f71e123f-9ed2-44f0-806c-888cd24c0c54-kube-api-access-s887c\") pod \"redhat-marketplace-42xkj\" (UID: \"f71e123f-9ed2-44f0-806c-888cd24c0c54\") " pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.073493 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.096354 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9fq4\" (UniqueName: \"kubernetes.io/projected/f58c4632-6f14-420c-b220-e362cfbf7208-kube-api-access-r9fq4\") pod \"redhat-operators-4252j\" (UID: \"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.096409 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c4632-6f14-420c-b220-e362cfbf7208-catalog-content\") pod \"redhat-operators-4252j\" (UID: \"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.096429 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c4632-6f14-420c-b220-e362cfbf7208-utilities\") pod \"redhat-operators-4252j\" (UID: \"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.198094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c4632-6f14-420c-b220-e362cfbf7208-utilities\") pod \"redhat-operators-4252j\" (UID: \"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.198474 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9fq4\" (UniqueName: \"kubernetes.io/projected/f58c4632-6f14-420c-b220-e362cfbf7208-kube-api-access-r9fq4\") pod \"redhat-operators-4252j\" (UID: 
\"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.198503 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c4632-6f14-420c-b220-e362cfbf7208-catalog-content\") pod \"redhat-operators-4252j\" (UID: \"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.198689 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58c4632-6f14-420c-b220-e362cfbf7208-utilities\") pod \"redhat-operators-4252j\" (UID: \"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.198844 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58c4632-6f14-420c-b220-e362cfbf7208-catalog-content\") pod \"redhat-operators-4252j\" (UID: \"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.220427 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9fq4\" (UniqueName: \"kubernetes.io/projected/f58c4632-6f14-420c-b220-e362cfbf7208-kube-api-access-r9fq4\") pod \"redhat-operators-4252j\" (UID: \"f58c4632-6f14-420c-b220-e362cfbf7208\") " pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.272278 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.411294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zx6gk" event={"ID":"0fa97f53-b6fe-497f-b2b4-fded6b7a9285","Type":"ContainerStarted","Data":"206e6647e3e98f336538ed60e5f75de89d559dce33ce26b0ba39fce8c310d62e"} Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.429111 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zx6gk" podStartSLOduration=1.990390833 podStartE2EDuration="3.429097564s" podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="2026-02-16 15:13:35.393698709 +0000 UTC m=+364.685691604" lastFinishedPulling="2026-02-16 15:13:36.83240544 +0000 UTC m=+366.124398335" observedRunningTime="2026-02-16 15:13:37.427824538 +0000 UTC m=+366.719817433" watchObservedRunningTime="2026-02-16 15:13:37.429097564 +0000 UTC m=+366.721090459" Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.466191 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4252j"] Feb 16 15:13:37 crc kubenswrapper[4835]: W0216 15:13:37.473803 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf58c4632_6f14_420c_b220_e362cfbf7208.slice/crio-f472d9d4732214eab8058c01e8d272474b910dd77a4ec1237e142024f6bad160 WatchSource:0}: Error finding container f472d9d4732214eab8058c01e8d272474b910dd77a4ec1237e142024f6bad160: Status 404 returned error can't find the container with id f472d9d4732214eab8058c01e8d272474b910dd77a4ec1237e142024f6bad160 Feb 16 15:13:37 crc kubenswrapper[4835]: I0216 15:13:37.545717 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-42xkj"] Feb 16 15:13:37 crc kubenswrapper[4835]: W0216 15:13:37.551259 4835 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf71e123f_9ed2_44f0_806c_888cd24c0c54.slice/crio-30c3f59cceebdb3814ee39d198ca78b0e4f48b4762dd07202340c32b8b043a0b WatchSource:0}: Error finding container 30c3f59cceebdb3814ee39d198ca78b0e4f48b4762dd07202340c32b8b043a0b: Status 404 returned error can't find the container with id 30c3f59cceebdb3814ee39d198ca78b0e4f48b4762dd07202340c32b8b043a0b Feb 16 15:13:38 crc kubenswrapper[4835]: I0216 15:13:38.418123 4835 generic.go:334] "Generic (PLEG): container finished" podID="f71e123f-9ed2-44f0-806c-888cd24c0c54" containerID="7576e17cd99a4f18454512e20ebc508fd51cb7da37dcf56c0e54b539f12cf287" exitCode=0 Feb 16 15:13:38 crc kubenswrapper[4835]: I0216 15:13:38.418347 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42xkj" event={"ID":"f71e123f-9ed2-44f0-806c-888cd24c0c54","Type":"ContainerDied","Data":"7576e17cd99a4f18454512e20ebc508fd51cb7da37dcf56c0e54b539f12cf287"} Feb 16 15:13:38 crc kubenswrapper[4835]: I0216 15:13:38.419077 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42xkj" event={"ID":"f71e123f-9ed2-44f0-806c-888cd24c0c54","Type":"ContainerStarted","Data":"30c3f59cceebdb3814ee39d198ca78b0e4f48b4762dd07202340c32b8b043a0b"} Feb 16 15:13:38 crc kubenswrapper[4835]: I0216 15:13:38.420974 4835 generic.go:334] "Generic (PLEG): container finished" podID="f58c4632-6f14-420c-b220-e362cfbf7208" containerID="a5e7ce824510b163758b31a7b03363e525427aeb98c199845d70b7d5c4cd8262" exitCode=0 Feb 16 15:13:38 crc kubenswrapper[4835]: I0216 15:13:38.421655 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4252j" event={"ID":"f58c4632-6f14-420c-b220-e362cfbf7208","Type":"ContainerDied","Data":"a5e7ce824510b163758b31a7b03363e525427aeb98c199845d70b7d5c4cd8262"} Feb 16 15:13:38 crc kubenswrapper[4835]: I0216 15:13:38.421693 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4252j" event={"ID":"f58c4632-6f14-420c-b220-e362cfbf7208","Type":"ContainerStarted","Data":"f472d9d4732214eab8058c01e8d272474b910dd77a4ec1237e142024f6bad160"} Feb 16 15:13:40 crc kubenswrapper[4835]: I0216 15:13:40.439353 4835 generic.go:334] "Generic (PLEG): container finished" podID="f71e123f-9ed2-44f0-806c-888cd24c0c54" containerID="8040b6c80b96f6d6c7955001128e51f0d145f7c05cb667532abb1958b0bb1436" exitCode=0 Feb 16 15:13:40 crc kubenswrapper[4835]: I0216 15:13:40.439454 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42xkj" event={"ID":"f71e123f-9ed2-44f0-806c-888cd24c0c54","Type":"ContainerDied","Data":"8040b6c80b96f6d6c7955001128e51f0d145f7c05cb667532abb1958b0bb1436"} Feb 16 15:13:40 crc kubenswrapper[4835]: I0216 15:13:40.442921 4835 generic.go:334] "Generic (PLEG): container finished" podID="f58c4632-6f14-420c-b220-e362cfbf7208" containerID="586629a5a9ce3997561a6925eac5be7706f5c397b4d9eeb45f425a98fab4bfdc" exitCode=0 Feb 16 15:13:40 crc kubenswrapper[4835]: I0216 15:13:40.442959 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4252j" event={"ID":"f58c4632-6f14-420c-b220-e362cfbf7208","Type":"ContainerDied","Data":"586629a5a9ce3997561a6925eac5be7706f5c397b4d9eeb45f425a98fab4bfdc"} Feb 16 15:13:41 crc kubenswrapper[4835]: I0216 15:13:41.450186 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-42xkj" event={"ID":"f71e123f-9ed2-44f0-806c-888cd24c0c54","Type":"ContainerStarted","Data":"880014e7da3257b12c51d2953d272fb5ab30adb6ce222305e0f880fe070a83b7"} Feb 16 15:13:41 crc kubenswrapper[4835]: I0216 15:13:41.452742 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4252j" 
event={"ID":"f58c4632-6f14-420c-b220-e362cfbf7208","Type":"ContainerStarted","Data":"8d6841a3fad6957a2f5cdc451d867597f2b52994962af07eb1cb7423acbc672d"} Feb 16 15:13:41 crc kubenswrapper[4835]: I0216 15:13:41.454683 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4qwnq" event={"ID":"c174b939-7427-41a4-8178-89525fb0186d","Type":"ContainerStarted","Data":"7308111d0a0131a47f2178ffc012af6cb4dfc0cd8c3f686b5822d28a9d571e91"} Feb 16 15:13:41 crc kubenswrapper[4835]: I0216 15:13:41.483697 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-42xkj" podStartSLOduration=2.978613829 podStartE2EDuration="5.483653411s" podCreationTimestamp="2026-02-16 15:13:36 +0000 UTC" firstStartedPulling="2026-02-16 15:13:38.420005185 +0000 UTC m=+367.711998080" lastFinishedPulling="2026-02-16 15:13:40.925044767 +0000 UTC m=+370.217037662" observedRunningTime="2026-02-16 15:13:41.475762351 +0000 UTC m=+370.767755276" watchObservedRunningTime="2026-02-16 15:13:41.483653411 +0000 UTC m=+370.775646306" Feb 16 15:13:41 crc kubenswrapper[4835]: I0216 15:13:41.493331 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4252j" podStartSLOduration=2.975278567 podStartE2EDuration="5.49331123s" podCreationTimestamp="2026-02-16 15:13:36 +0000 UTC" firstStartedPulling="2026-02-16 15:13:38.422254508 +0000 UTC m=+367.714247403" lastFinishedPulling="2026-02-16 15:13:40.940287171 +0000 UTC m=+370.232280066" observedRunningTime="2026-02-16 15:13:41.49186732 +0000 UTC m=+370.783860215" watchObservedRunningTime="2026-02-16 15:13:41.49331123 +0000 UTC m=+370.785304125" Feb 16 15:13:41 crc kubenswrapper[4835]: I0216 15:13:41.517129 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4qwnq" podStartSLOduration=2.042431742 podStartE2EDuration="7.517113383s" 
podCreationTimestamp="2026-02-16 15:13:34 +0000 UTC" firstStartedPulling="2026-02-16 15:13:35.394229104 +0000 UTC m=+364.686221999" lastFinishedPulling="2026-02-16 15:13:40.868910745 +0000 UTC m=+370.160903640" observedRunningTime="2026-02-16 15:13:41.514116699 +0000 UTC m=+370.806109604" watchObservedRunningTime="2026-02-16 15:13:41.517113383 +0000 UTC m=+370.809106278" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.386391 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7rj5"] Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.387216 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.404593 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7rj5"] Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.583742 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.583978 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/794738d8-c8a7-4a67-a737-0959c7ff8bf8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.584009 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/794738d8-c8a7-4a67-a737-0959c7ff8bf8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.584035 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-bound-sa-token\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.584054 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-registry-tls\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.584072 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/794738d8-c8a7-4a67-a737-0959c7ff8bf8-trusted-ca\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.584088 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-896l5\" (UniqueName: \"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-kube-api-access-896l5\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.584153 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/794738d8-c8a7-4a67-a737-0959c7ff8bf8-registry-certificates\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.602564 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.684910 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/794738d8-c8a7-4a67-a737-0959c7ff8bf8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.684958 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/794738d8-c8a7-4a67-a737-0959c7ff8bf8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.684989 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-bound-sa-token\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.685006 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-896l5\" (UniqueName: \"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-kube-api-access-896l5\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.685023 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-registry-tls\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.685039 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/794738d8-c8a7-4a67-a737-0959c7ff8bf8-trusted-ca\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.685067 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/794738d8-c8a7-4a67-a737-0959c7ff8bf8-registry-certificates\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.686292 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/794738d8-c8a7-4a67-a737-0959c7ff8bf8-registry-certificates\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.686661 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/794738d8-c8a7-4a67-a737-0959c7ff8bf8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.688831 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/794738d8-c8a7-4a67-a737-0959c7ff8bf8-trusted-ca\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.692772 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/794738d8-c8a7-4a67-a737-0959c7ff8bf8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.693785 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-registry-tls\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 
15:13:43.703014 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-bound-sa-token\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:43 crc kubenswrapper[4835]: I0216 15:13:43.703424 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-896l5\" (UniqueName: \"kubernetes.io/projected/794738d8-c8a7-4a67-a737-0959c7ff8bf8-kube-api-access-896l5\") pod \"image-registry-66df7c8f76-k7rj5\" (UID: \"794738d8-c8a7-4a67-a737-0959c7ff8bf8\") " pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.003675 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.399498 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k7rj5"] Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.482035 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" event={"ID":"794738d8-c8a7-4a67-a737-0959c7ff8bf8","Type":"ContainerStarted","Data":"089e3cf3dd585460849e00d73576c7ea7c07298dbd6cbdd55da818681fe60ff4"} Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.677025 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.677079 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.727045 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.863751 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.863810 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:44 crc kubenswrapper[4835]: I0216 15:13:44.905802 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:45 crc kubenswrapper[4835]: I0216 15:13:45.488285 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" event={"ID":"794738d8-c8a7-4a67-a737-0959c7ff8bf8","Type":"ContainerStarted","Data":"9b4f9d59874f47592b72482381d89e2185faef806210b1a90e7079ef6f542164"} Feb 16 15:13:45 crc kubenswrapper[4835]: I0216 15:13:45.488622 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:13:45 crc kubenswrapper[4835]: I0216 15:13:45.508667 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" podStartSLOduration=2.508649645 podStartE2EDuration="2.508649645s" podCreationTimestamp="2026-02-16 15:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:45.505474967 +0000 UTC m=+374.797467862" watchObservedRunningTime="2026-02-16 15:13:45.508649645 +0000 UTC m=+374.800642530" Feb 16 15:13:45 crc kubenswrapper[4835]: I0216 15:13:45.533402 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zx6gk" Feb 16 15:13:47 crc kubenswrapper[4835]: I0216 
15:13:47.074285 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:47 crc kubenswrapper[4835]: I0216 15:13:47.074639 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:47 crc kubenswrapper[4835]: I0216 15:13:47.109993 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:47 crc kubenswrapper[4835]: I0216 15:13:47.277599 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:47 crc kubenswrapper[4835]: I0216 15:13:47.277679 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:47 crc kubenswrapper[4835]: I0216 15:13:47.323732 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:47 crc kubenswrapper[4835]: I0216 15:13:47.538670 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4252j" Feb 16 15:13:47 crc kubenswrapper[4835]: I0216 15:13:47.555500 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-42xkj" Feb 16 15:13:48 crc kubenswrapper[4835]: I0216 15:13:48.587696 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:13:48 crc kubenswrapper[4835]: I0216 15:13:48.588337 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" 
podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:13:54 crc kubenswrapper[4835]: I0216 15:13:54.714541 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4qwnq" Feb 16 15:14:04 crc kubenswrapper[4835]: I0216 15:14:04.009444 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k7rj5" Feb 16 15:14:04 crc kubenswrapper[4835]: I0216 15:14:04.069560 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxmc7"] Feb 16 15:14:15 crc kubenswrapper[4835]: I0216 15:14:15.641949 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-p9jlz"] Feb 16 15:14:18 crc kubenswrapper[4835]: I0216 15:14:18.586692 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:14:18 crc kubenswrapper[4835]: I0216 15:14:18.586755 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:14:18 crc kubenswrapper[4835]: I0216 15:14:18.586801 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:14:18 crc kubenswrapper[4835]: I0216 15:14:18.587363 4835 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c55d3f5b42809c4991ad19df6589021934ee6a2792c9d5ee4984082ec22f35aa"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:14:18 crc kubenswrapper[4835]: I0216 15:14:18.587420 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://c55d3f5b42809c4991ad19df6589021934ee6a2792c9d5ee4984082ec22f35aa" gracePeriod=600 Feb 16 15:14:19 crc kubenswrapper[4835]: I0216 15:14:19.665953 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="c55d3f5b42809c4991ad19df6589021934ee6a2792c9d5ee4984082ec22f35aa" exitCode=0 Feb 16 15:14:19 crc kubenswrapper[4835]: I0216 15:14:19.666021 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"c55d3f5b42809c4991ad19df6589021934ee6a2792c9d5ee4984082ec22f35aa"} Feb 16 15:14:19 crc kubenswrapper[4835]: I0216 15:14:19.666384 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"1a28f0d6525f9971d77742b65377602c37eac99fbd17ba56f4ecbef96e8a8ccd"} Feb 16 15:14:19 crc kubenswrapper[4835]: I0216 15:14:19.666407 4835 scope.go:117] "RemoveContainer" containerID="cdca03b984d6f8135c346eccbd19d0ab214a783a813b6029a0634982b0c4f82b" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.108065 4835 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" podUID="8cb8fe18-6040-4d23-a89b-e338df070e75" containerName="registry" containerID="cri-o://08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6" gracePeriod=30 Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.510016 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.606877 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8cb8fe18-6040-4d23-a89b-e338df070e75-installation-pull-secrets\") pod \"8cb8fe18-6040-4d23-a89b-e338df070e75\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.606920 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-bound-sa-token\") pod \"8cb8fe18-6040-4d23-a89b-e338df070e75\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.606974 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-tls\") pod \"8cb8fe18-6040-4d23-a89b-e338df070e75\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.607083 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8cb8fe18-6040-4d23-a89b-e338df070e75\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.607109 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8cb8fe18-6040-4d23-a89b-e338df070e75-ca-trust-extracted\") pod \"8cb8fe18-6040-4d23-a89b-e338df070e75\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.607139 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8djdq\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-kube-api-access-8djdq\") pod \"8cb8fe18-6040-4d23-a89b-e338df070e75\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.607161 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-certificates\") pod \"8cb8fe18-6040-4d23-a89b-e338df070e75\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.607204 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-trusted-ca\") pod \"8cb8fe18-6040-4d23-a89b-e338df070e75\" (UID: \"8cb8fe18-6040-4d23-a89b-e338df070e75\") " Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.608034 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8cb8fe18-6040-4d23-a89b-e338df070e75" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.609709 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8cb8fe18-6040-4d23-a89b-e338df070e75" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.613254 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-kube-api-access-8djdq" (OuterVolumeSpecName: "kube-api-access-8djdq") pod "8cb8fe18-6040-4d23-a89b-e338df070e75" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75"). InnerVolumeSpecName "kube-api-access-8djdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.613513 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8cb8fe18-6040-4d23-a89b-e338df070e75" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.614560 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb8fe18-6040-4d23-a89b-e338df070e75-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8cb8fe18-6040-4d23-a89b-e338df070e75" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.615464 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8cb8fe18-6040-4d23-a89b-e338df070e75" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.628384 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8cb8fe18-6040-4d23-a89b-e338df070e75" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.646674 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb8fe18-6040-4d23-a89b-e338df070e75-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8cb8fe18-6040-4d23-a89b-e338df070e75" (UID: "8cb8fe18-6040-4d23-a89b-e338df070e75"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.708221 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.708248 4835 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8cb8fe18-6040-4d23-a89b-e338df070e75-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.708258 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.708267 4835 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.708275 4835 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8cb8fe18-6040-4d23-a89b-e338df070e75-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.708283 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8djdq\" (UniqueName: \"kubernetes.io/projected/8cb8fe18-6040-4d23-a89b-e338df070e75-kube-api-access-8djdq\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.708290 4835 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8cb8fe18-6040-4d23-a89b-e338df070e75-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:29 crc 
kubenswrapper[4835]: I0216 15:14:29.720254 4835 generic.go:334] "Generic (PLEG): container finished" podID="8cb8fe18-6040-4d23-a89b-e338df070e75" containerID="08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6" exitCode=0 Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.720293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" event={"ID":"8cb8fe18-6040-4d23-a89b-e338df070e75","Type":"ContainerDied","Data":"08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6"} Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.720320 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" event={"ID":"8cb8fe18-6040-4d23-a89b-e338df070e75","Type":"ContainerDied","Data":"a90502c543c3682ca09092ed2389c380cf99bc25ec7f1478fc2c41619f19c145"} Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.720338 4835 scope.go:117] "RemoveContainer" containerID="08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.720371 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rxmc7" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.745128 4835 scope.go:117] "RemoveContainer" containerID="08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6" Feb 16 15:14:29 crc kubenswrapper[4835]: E0216 15:14:29.746820 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6\": container with ID starting with 08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6 not found: ID does not exist" containerID="08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.746871 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6"} err="failed to get container status \"08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6\": rpc error: code = NotFound desc = could not find container \"08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6\": container with ID starting with 08ab02882c01925488fccb60e7fb0608dd036d4889035fc2029be80c11ca3ae6 not found: ID does not exist" Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.750366 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxmc7"] Feb 16 15:14:29 crc kubenswrapper[4835]: I0216 15:14:29.754876 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rxmc7"] Feb 16 15:14:31 crc kubenswrapper[4835]: I0216 15:14:31.384278 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb8fe18-6040-4d23-a89b-e338df070e75" path="/var/lib/kubelet/pods/8cb8fe18-6040-4d23-a89b-e338df070e75/volumes" Feb 16 15:14:40 crc kubenswrapper[4835]: I0216 
15:14:40.664781 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" podUID="7e597add-6065-48a9-85e1-06530d981505" containerName="oauth-openshift" containerID="cri-o://194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04" gracePeriod=15 Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.035480 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.066436 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb"] Feb 16 15:14:41 crc kubenswrapper[4835]: E0216 15:14:41.066705 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb8fe18-6040-4d23-a89b-e338df070e75" containerName="registry" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.066727 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb8fe18-6040-4d23-a89b-e338df070e75" containerName="registry" Feb 16 15:14:41 crc kubenswrapper[4835]: E0216 15:14:41.066745 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e597add-6065-48a9-85e1-06530d981505" containerName="oauth-openshift" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.066754 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e597add-6065-48a9-85e1-06530d981505" containerName="oauth-openshift" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.066872 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb8fe18-6040-4d23-a89b-e338df070e75" containerName="registry" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.066893 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e597add-6065-48a9-85e1-06530d981505" containerName="oauth-openshift" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.067321 4835 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.080139 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb"] Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.147611 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-provider-selection\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.147673 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-service-ca\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.147715 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-session\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.147741 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-ocp-branding-template\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.147804 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-audit-policies\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.147834 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-cliconfig\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.147861 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5pr2\" (UniqueName: \"kubernetes.io/projected/7e597add-6065-48a9-85e1-06530d981505-kube-api-access-h5pr2\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148442 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-error\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148470 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148487 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e597add-6065-48a9-85e1-06530d981505-audit-dir\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148483 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148514 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-serving-cert\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148544 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-router-certs\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148576 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-login\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: 
\"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148597 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-trusted-ca-bundle\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148618 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-idp-0-file-data\") pod \"7e597add-6065-48a9-85e1-06530d981505\" (UID: \"7e597add-6065-48a9-85e1-06530d981505\") " Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148545 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e597add-6065-48a9-85e1-06530d981505-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148699 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-router-certs\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148723 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148767 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148797 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148824 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148869 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148886 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d32fd6e3-0264-466f-baf3-61855ec8d82a-audit-dir\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148902 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-error\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148919 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-audit-policies\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148941 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-service-ca\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148963 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96nrt\" (UniqueName: \"kubernetes.io/projected/d32fd6e3-0264-466f-baf3-61855ec8d82a-kube-api-access-96nrt\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.148984 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-login\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " 
pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.149001 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-session\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.149032 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.149042 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.149051 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e597add-6065-48a9-85e1-06530d981505-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.149447 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.149636 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.152631 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e597add-6065-48a9-85e1-06530d981505-kube-api-access-h5pr2" (OuterVolumeSpecName: "kube-api-access-h5pr2") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "kube-api-access-h5pr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.152877 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.152910 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.153011 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.153049 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.153670 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.153687 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.154684 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.154870 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7e597add-6065-48a9-85e1-06530d981505" (UID: "7e597add-6065-48a9-85e1-06530d981505"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.249890 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.249938 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.249978 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.249999 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250031 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250055 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d32fd6e3-0264-466f-baf3-61855ec8d82a-audit-dir\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250077 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-error\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250099 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-audit-policies\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250121 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-service-ca\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " 
pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250138 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96nrt\" (UniqueName: \"kubernetes.io/projected/d32fd6e3-0264-466f-baf3-61855ec8d82a-kube-api-access-96nrt\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250165 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-login\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250186 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-session\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250205 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-router-certs\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250231 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250280 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250295 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250309 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250322 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250339 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250352 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250365 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250377 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250390 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e597add-6065-48a9-85e1-06530d981505-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250401 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5pr2\" (UniqueName: \"kubernetes.io/projected/7e597add-6065-48a9-85e1-06530d981505-kube-api-access-h5pr2\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.250413 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e597add-6065-48a9-85e1-06530d981505-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.251264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " 
pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.251271 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d32fd6e3-0264-466f-baf3-61855ec8d82a-audit-dir\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.251948 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-audit-policies\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.252730 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-service-ca\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.253777 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.253935 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.254037 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.254406 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.255212 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-login\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.255233 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-router-certs\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " 
pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.256279 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-system-session\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.256872 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.257188 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d32fd6e3-0264-466f-baf3-61855ec8d82a-v4-0-config-user-template-error\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.265105 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96nrt\" (UniqueName: \"kubernetes.io/projected/d32fd6e3-0264-466f-baf3-61855ec8d82a-kube-api-access-96nrt\") pod \"oauth-openshift-76c8bd7ccc-2hzcb\" (UID: \"d32fd6e3-0264-466f-baf3-61855ec8d82a\") " pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.392772 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.620827 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb"] Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.787374 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" event={"ID":"d32fd6e3-0264-466f-baf3-61855ec8d82a","Type":"ContainerStarted","Data":"95b9d02911bb63ff053a433464dd1adcb62462f5250094f88d3b2b14b5beae67"} Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.789165 4835 generic.go:334] "Generic (PLEG): container finished" podID="7e597add-6065-48a9-85e1-06530d981505" containerID="194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04" exitCode=0 Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.789191 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" event={"ID":"7e597add-6065-48a9-85e1-06530d981505","Type":"ContainerDied","Data":"194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04"} Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.789206 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" event={"ID":"7e597add-6065-48a9-85e1-06530d981505","Type":"ContainerDied","Data":"6995a85130d423d65ba65203ea719550244dc2bd1015e811886244e761c14305"} Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.789221 4835 scope.go:117] "RemoveContainer" containerID="194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.789335 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-65ff5df46b-p9jlz" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.814736 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-p9jlz"] Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.817905 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-p9jlz"] Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.819182 4835 scope.go:117] "RemoveContainer" containerID="194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04" Feb 16 15:14:41 crc kubenswrapper[4835]: E0216 15:14:41.819775 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04\": container with ID starting with 194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04 not found: ID does not exist" containerID="194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04" Feb 16 15:14:41 crc kubenswrapper[4835]: I0216 15:14:41.819818 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04"} err="failed to get container status \"194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04\": rpc error: code = NotFound desc = could not find container \"194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04\": container with ID starting with 194d8797e9bcdf86ea818e8f9f314f7a196d906934a3747a2626162dab8bce04 not found: ID does not exist" Feb 16 15:14:42 crc kubenswrapper[4835]: I0216 15:14:42.798228 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" 
event={"ID":"d32fd6e3-0264-466f-baf3-61855ec8d82a","Type":"ContainerStarted","Data":"aaccbdb7b478b3063d86feb9767316b9bacd5f546612d058da3f8bd4925b93d8"} Feb 16 15:14:42 crc kubenswrapper[4835]: I0216 15:14:42.798367 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:42 crc kubenswrapper[4835]: I0216 15:14:42.808391 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" Feb 16 15:14:42 crc kubenswrapper[4835]: I0216 15:14:42.830725 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76c8bd7ccc-2hzcb" podStartSLOduration=27.830695089 podStartE2EDuration="27.830695089s" podCreationTimestamp="2026-02-16 15:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:42.825262954 +0000 UTC m=+432.117255849" watchObservedRunningTime="2026-02-16 15:14:42.830695089 +0000 UTC m=+432.122688014" Feb 16 15:14:43 crc kubenswrapper[4835]: I0216 15:14:43.393380 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e597add-6065-48a9-85e1-06530d981505" path="/var/lib/kubelet/pods/7e597add-6065-48a9-85e1-06530d981505/volumes" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.201519 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22"] Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.203092 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.206848 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.207312 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.220415 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22"] Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.357608 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rkgg\" (UniqueName: \"kubernetes.io/projected/79abfe47-062a-40f9-a2de-61850e9711d7-kube-api-access-9rkgg\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.357819 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79abfe47-062a-40f9-a2de-61850e9711d7-secret-volume\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.357861 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79abfe47-062a-40f9-a2de-61850e9711d7-config-volume\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.458575 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rkgg\" (UniqueName: \"kubernetes.io/projected/79abfe47-062a-40f9-a2de-61850e9711d7-kube-api-access-9rkgg\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.458679 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79abfe47-062a-40f9-a2de-61850e9711d7-secret-volume\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.458720 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79abfe47-062a-40f9-a2de-61850e9711d7-config-volume\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.459957 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79abfe47-062a-40f9-a2de-61850e9711d7-config-volume\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.474145 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/79abfe47-062a-40f9-a2de-61850e9711d7-secret-volume\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.493240 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rkgg\" (UniqueName: \"kubernetes.io/projected/79abfe47-062a-40f9-a2de-61850e9711d7-kube-api-access-9rkgg\") pod \"collect-profiles-29520915-gtd22\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:00 crc kubenswrapper[4835]: I0216 15:15:00.526523 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:01 crc kubenswrapper[4835]: I0216 15:15:01.014928 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22"] Feb 16 15:15:01 crc kubenswrapper[4835]: I0216 15:15:01.942734 4835 generic.go:334] "Generic (PLEG): container finished" podID="79abfe47-062a-40f9-a2de-61850e9711d7" containerID="16cd500f1d79347b86d347f75314b510623fc28deccfe416ca371aaca8f056e2" exitCode=0 Feb 16 15:15:01 crc kubenswrapper[4835]: I0216 15:15:01.942838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" event={"ID":"79abfe47-062a-40f9-a2de-61850e9711d7","Type":"ContainerDied","Data":"16cd500f1d79347b86d347f75314b510623fc28deccfe416ca371aaca8f056e2"} Feb 16 15:15:01 crc kubenswrapper[4835]: I0216 15:15:01.943138 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" 
event={"ID":"79abfe47-062a-40f9-a2de-61850e9711d7","Type":"ContainerStarted","Data":"4cf43d1b73630e7c89e3a2ad3d4d8c469ce98bed738788949479686e4bc23701"} Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.259168 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.404888 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rkgg\" (UniqueName: \"kubernetes.io/projected/79abfe47-062a-40f9-a2de-61850e9711d7-kube-api-access-9rkgg\") pod \"79abfe47-062a-40f9-a2de-61850e9711d7\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.405054 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79abfe47-062a-40f9-a2de-61850e9711d7-secret-volume\") pod \"79abfe47-062a-40f9-a2de-61850e9711d7\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.405157 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79abfe47-062a-40f9-a2de-61850e9711d7-config-volume\") pod \"79abfe47-062a-40f9-a2de-61850e9711d7\" (UID: \"79abfe47-062a-40f9-a2de-61850e9711d7\") " Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.405838 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79abfe47-062a-40f9-a2de-61850e9711d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "79abfe47-062a-40f9-a2de-61850e9711d7" (UID: "79abfe47-062a-40f9-a2de-61850e9711d7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.413733 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79abfe47-062a-40f9-a2de-61850e9711d7-kube-api-access-9rkgg" (OuterVolumeSpecName: "kube-api-access-9rkgg") pod "79abfe47-062a-40f9-a2de-61850e9711d7" (UID: "79abfe47-062a-40f9-a2de-61850e9711d7"). InnerVolumeSpecName "kube-api-access-9rkgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.415096 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79abfe47-062a-40f9-a2de-61850e9711d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "79abfe47-062a-40f9-a2de-61850e9711d7" (UID: "79abfe47-062a-40f9-a2de-61850e9711d7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.506515 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rkgg\" (UniqueName: \"kubernetes.io/projected/79abfe47-062a-40f9-a2de-61850e9711d7-kube-api-access-9rkgg\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.506598 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/79abfe47-062a-40f9-a2de-61850e9711d7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.506619 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79abfe47-062a-40f9-a2de-61850e9711d7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.958042 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" 
event={"ID":"79abfe47-062a-40f9-a2de-61850e9711d7","Type":"ContainerDied","Data":"4cf43d1b73630e7c89e3a2ad3d4d8c469ce98bed738788949479686e4bc23701"} Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.958110 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf43d1b73630e7c89e3a2ad3d4d8c469ce98bed738788949479686e4bc23701" Feb 16 15:15:03 crc kubenswrapper[4835]: I0216 15:15:03.958114 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22" Feb 16 15:16:18 crc kubenswrapper[4835]: I0216 15:16:18.587470 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:16:18 crc kubenswrapper[4835]: I0216 15:16:18.587966 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:16:48 crc kubenswrapper[4835]: I0216 15:16:48.587364 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:16:48 crc kubenswrapper[4835]: I0216 15:16:48.589288 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.185861 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq"] Feb 16 15:17:07 crc kubenswrapper[4835]: E0216 15:17:07.186668 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79abfe47-062a-40f9-a2de-61850e9711d7" containerName="collect-profiles" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.186691 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="79abfe47-062a-40f9-a2de-61850e9711d7" containerName="collect-profiles" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.186846 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="79abfe47-062a-40f9-a2de-61850e9711d7" containerName="collect-profiles" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.187978 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.190706 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.197008 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq"] Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.347389 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.347486 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.347512 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv89x\" (UniqueName: \"kubernetes.io/projected/b5e608e7-9be5-4109-afef-3f02146e5dbb-kube-api-access-bv89x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: 
I0216 15:17:07.448860 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.448951 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.448984 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv89x\" (UniqueName: \"kubernetes.io/projected/b5e608e7-9be5-4109-afef-3f02146e5dbb-kube-api-access-bv89x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.449681 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.449759 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.468488 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv89x\" (UniqueName: \"kubernetes.io/projected/b5e608e7-9be5-4109-afef-3f02146e5dbb-kube-api-access-bv89x\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.513458 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.706002 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq"] Feb 16 15:17:07 crc kubenswrapper[4835]: I0216 15:17:07.777363 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" event={"ID":"b5e608e7-9be5-4109-afef-3f02146e5dbb","Type":"ContainerStarted","Data":"1bcb2b62e55d1ecb09dd76c00669a19d062594309e36089d0a2feb23bfc57fde"} Feb 16 15:17:08 crc kubenswrapper[4835]: I0216 15:17:08.785032 4835 generic.go:334] "Generic (PLEG): container finished" podID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerID="fc395f9208655b1a8cadf93cf0d505ddbd47d2e6340f27c638c492b983e18ade" exitCode=0 Feb 16 15:17:08 crc kubenswrapper[4835]: I0216 15:17:08.785139 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" event={"ID":"b5e608e7-9be5-4109-afef-3f02146e5dbb","Type":"ContainerDied","Data":"fc395f9208655b1a8cadf93cf0d505ddbd47d2e6340f27c638c492b983e18ade"} Feb 16 15:17:08 crc kubenswrapper[4835]: I0216 15:17:08.786537 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:17:09 crc kubenswrapper[4835]: I0216 15:17:09.812431 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" event={"ID":"b5e608e7-9be5-4109-afef-3f02146e5dbb","Type":"ContainerStarted","Data":"3087d3b7c0832a347bcefcb71bcc87b57ee0a4e80c52ecb41cb8a70f3c3049bd"} Feb 16 15:17:10 crc kubenswrapper[4835]: I0216 15:17:10.821938 4835 generic.go:334] "Generic (PLEG): container finished" podID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerID="3087d3b7c0832a347bcefcb71bcc87b57ee0a4e80c52ecb41cb8a70f3c3049bd" exitCode=0 Feb 16 15:17:10 crc kubenswrapper[4835]: I0216 15:17:10.822045 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" event={"ID":"b5e608e7-9be5-4109-afef-3f02146e5dbb","Type":"ContainerDied","Data":"3087d3b7c0832a347bcefcb71bcc87b57ee0a4e80c52ecb41cb8a70f3c3049bd"} Feb 16 15:17:11 crc kubenswrapper[4835]: I0216 15:17:11.830433 4835 generic.go:334] "Generic (PLEG): container finished" podID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerID="6a58ac39dded6e7cdc9bb43abce97ceaf6fd34fd55388014851f4666fcf16ff6" exitCode=0 Feb 16 15:17:11 crc kubenswrapper[4835]: I0216 15:17:11.830633 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" event={"ID":"b5e608e7-9be5-4109-afef-3f02146e5dbb","Type":"ContainerDied","Data":"6a58ac39dded6e7cdc9bb43abce97ceaf6fd34fd55388014851f4666fcf16ff6"} 
Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.171497 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.365759 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-bundle\") pod \"b5e608e7-9be5-4109-afef-3f02146e5dbb\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.365805 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv89x\" (UniqueName: \"kubernetes.io/projected/b5e608e7-9be5-4109-afef-3f02146e5dbb-kube-api-access-bv89x\") pod \"b5e608e7-9be5-4109-afef-3f02146e5dbb\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.365884 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-util\") pod \"b5e608e7-9be5-4109-afef-3f02146e5dbb\" (UID: \"b5e608e7-9be5-4109-afef-3f02146e5dbb\") " Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.369946 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-bundle" (OuterVolumeSpecName: "bundle") pod "b5e608e7-9be5-4109-afef-3f02146e5dbb" (UID: "b5e608e7-9be5-4109-afef-3f02146e5dbb"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.375039 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e608e7-9be5-4109-afef-3f02146e5dbb-kube-api-access-bv89x" (OuterVolumeSpecName: "kube-api-access-bv89x") pod "b5e608e7-9be5-4109-afef-3f02146e5dbb" (UID: "b5e608e7-9be5-4109-afef-3f02146e5dbb"). InnerVolumeSpecName "kube-api-access-bv89x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.377999 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-util" (OuterVolumeSpecName: "util") pod "b5e608e7-9be5-4109-afef-3f02146e5dbb" (UID: "b5e608e7-9be5-4109-afef-3f02146e5dbb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.467769 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.468056 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e608e7-9be5-4109-afef-3f02146e5dbb-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.468179 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv89x\" (UniqueName: \"kubernetes.io/projected/b5e608e7-9be5-4109-afef-3f02146e5dbb-kube-api-access-bv89x\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.844521 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" 
event={"ID":"b5e608e7-9be5-4109-afef-3f02146e5dbb","Type":"ContainerDied","Data":"1bcb2b62e55d1ecb09dd76c00669a19d062594309e36089d0a2feb23bfc57fde"} Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.844830 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bcb2b62e55d1ecb09dd76c00669a19d062594309e36089d0a2feb23bfc57fde" Feb 16 15:17:13 crc kubenswrapper[4835]: I0216 15:17:13.844622 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.548404 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nwz6"] Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.549182 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovn-controller" containerID="cri-o://3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6" gracePeriod=30 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.549265 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="northd" containerID="cri-o://271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1" gracePeriod=30 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.549316 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kube-rbac-proxy-node" containerID="cri-o://32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211" gracePeriod=30 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.549362 4835 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovn-acl-logging" containerID="cri-o://d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571" gracePeriod=30 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.549495 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="sbdb" containerID="cri-o://6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9" gracePeriod=30 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.549551 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="nbdb" containerID="cri-o://8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48" gracePeriod=30 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.549301 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb" gracePeriod=30 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.586559 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.586623 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.586673 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.587392 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a28f0d6525f9971d77742b65377602c37eac99fbd17ba56f4ecbef96e8a8ccd"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.587465 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://1a28f0d6525f9971d77742b65377602c37eac99fbd17ba56f4ecbef96e8a8ccd" gracePeriod=600 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.593134 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" containerID="cri-o://bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a" gracePeriod=30 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.871331 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/2.log" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.871895 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/1.log" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.871939 4835 
generic.go:334] "Generic (PLEG): container finished" podID="36a4edb0-ce1a-4b59-b1f9-f5b43255de2d" containerID="98f8e6d7b44084a40632591b1774ef5147c6f4e787ac6fb60321e2810fa9ec35" exitCode=2 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.871991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gncxk" event={"ID":"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d","Type":"ContainerDied","Data":"98f8e6d7b44084a40632591b1774ef5147c6f4e787ac6fb60321e2810fa9ec35"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.872021 4835 scope.go:117] "RemoveContainer" containerID="7edb148cc65ee2949251ab04a07a1827852b3de552110178d05456e30d5a8d04" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.872500 4835 scope.go:117] "RemoveContainer" containerID="98f8e6d7b44084a40632591b1774ef5147c6f4e787ac6fb60321e2810fa9ec35" Feb 16 15:17:18 crc kubenswrapper[4835]: E0216 15:17:18.872722 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gncxk_openshift-multus(36a4edb0-ce1a-4b59-b1f9-f5b43255de2d)\"" pod="openshift-multus/multus-gncxk" podUID="36a4edb0-ce1a-4b59-b1f9-f5b43255de2d" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.876313 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovnkube-controller/3.log" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.878356 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovn-acl-logging/0.log" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.878786 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovn-controller/0.log" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 
15:17:18.879119 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a" exitCode=0 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879136 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9" exitCode=0 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879145 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb" exitCode=0 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879151 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211" exitCode=0 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879157 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571" exitCode=143 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879164 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6" exitCode=143 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879200 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879224 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" 
event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879235 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879243 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879252 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.879263 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.884570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"1a28f0d6525f9971d77742b65377602c37eac99fbd17ba56f4ecbef96e8a8ccd"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.884512 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="1a28f0d6525f9971d77742b65377602c37eac99fbd17ba56f4ecbef96e8a8ccd" 
exitCode=0 Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.884869 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"73c350b3ac02f46ce7b27cf1db88e1f50effcba02c6c2d6096a643a3b0037668"} Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.913037 4835 scope.go:117] "RemoveContainer" containerID="fd0d1ff47e054c8a5ca08b6752f11f079e0fbbc7a4c51f3647a2e95da99f6fe1" Feb 16 15:17:18 crc kubenswrapper[4835]: I0216 15:17:18.951707 4835 scope.go:117] "RemoveContainer" containerID="c55d3f5b42809c4991ad19df6589021934ee6a2792c9d5ee4984082ec22f35aa" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.193075 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovn-acl-logging/0.log" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.193570 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovn-controller/0.log" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.194133 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256070 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256425 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-openvswitch\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256456 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-env-overrides\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256481 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-slash\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256510 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-config\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256522 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-netd\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256566 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-script-lib\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256583 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-node-log\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256601 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-log-socket\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256627 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-etc-openvswitch\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256642 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-systemd\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: 
\"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256656 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-ovn-kubernetes\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256677 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovn-node-metrics-cert\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256700 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-kubelet\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256715 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-systemd-units\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256734 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrwvw\" (UniqueName: \"kubernetes.io/projected/9a790a22-cc2f-414e-b43b-fd6df80d19da-kube-api-access-vrwvw\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256751 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-netns\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256763 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-var-lib-openvswitch\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256786 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-bin\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256806 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-ovn\") pod \"9a790a22-cc2f-414e-b43b-fd6df80d19da\" (UID: \"9a790a22-cc2f-414e-b43b-fd6df80d19da\") " Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.256295 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257043 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257079 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257388 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257417 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-slash" (OuterVolumeSpecName: "host-slash") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257650 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257678 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257879 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257906 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-node-log" (OuterVolumeSpecName: "node-log") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257922 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-log-socket" (OuterVolumeSpecName: "log-socket") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.257938 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.258309 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.258569 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.258598 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.258619 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.258646 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.258665 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.267613 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a790a22-cc2f-414e-b43b-fd6df80d19da-kube-api-access-vrwvw" (OuterVolumeSpecName: "kube-api-access-vrwvw") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "kube-api-access-vrwvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.268082 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.285011 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9a790a22-cc2f-414e-b43b-fd6df80d19da" (UID: "9a790a22-cc2f-414e-b43b-fd6df80d19da"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304043 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xtwj4"] Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304248 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304264 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304274 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kubecfg-setup" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304280 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kubecfg-setup" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304289 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kube-rbac-proxy-node" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304295 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kube-rbac-proxy-node" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304303 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304309 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304318 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304323 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304331 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerName="extract" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304337 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerName="extract" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304345 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerName="pull" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304351 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerName="pull" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304357 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovn-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304363 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovn-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304371 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerName="util" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304378 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerName="util" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304384 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" 
containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304390 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304397 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="northd" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304402 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="northd" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304409 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="nbdb" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304415 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="nbdb" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304422 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovn-acl-logging" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304428 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovn-acl-logging" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304438 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="sbdb" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304443 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="sbdb" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304541 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="northd" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 
15:17:19.304549 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304555 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304564 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e608e7-9be5-4109-afef-3f02146e5dbb" containerName="extract" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304569 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kube-rbac-proxy-node" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304576 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304585 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304591 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="sbdb" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304604 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304613 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovn-acl-logging" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304621 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovn-controller" Feb 16 
15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304631 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="nbdb" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304722 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304731 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: E0216 15:17:19.304746 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304753 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.304847 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerName="ovnkube-controller" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.306292 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357494 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357611 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-systemd\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-env-overrides\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357690 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357709 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-etc-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357724 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-run-netns\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357774 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-ovn\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357799 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-cni-netd\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357848 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-systemd-units\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357863 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-var-lib-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357919 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-slash\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357940 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx7b\" (UniqueName: \"kubernetes.io/projected/b4c6c937-a46a-4dff-a306-27c6430430cd-kube-api-access-7mx7b\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.357956 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358011 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-ovnkube-config\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358027 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-kubelet\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358045 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c6c937-a46a-4dff-a306-27c6430430cd-ovn-node-metrics-cert\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358091 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-log-socket\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358108 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-node-log\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358153 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-cni-bin\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358219 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-ovnkube-script-lib\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358260 4835 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358300 4835 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358309 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358318 4835 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358326 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358335 4835 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358343 4835 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358350 4835 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358358 4835 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358365 4835 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358374 4835 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358384 4835 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358392 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a790a22-cc2f-414e-b43b-fd6df80d19da-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358400 4835 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358407 4835 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358417 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrwvw\" (UniqueName: \"kubernetes.io/projected/9a790a22-cc2f-414e-b43b-fd6df80d19da-kube-api-access-vrwvw\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358426 4835 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358434 4835 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358441 4835 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.358449 4835 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a790a22-cc2f-414e-b43b-fd6df80d19da-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459290 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-systemd-units\") pod 
\"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459337 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-var-lib-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459358 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-slash\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459377 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mx7b\" (UniqueName: \"kubernetes.io/projected/b4c6c937-a46a-4dff-a306-27c6430430cd-kube-api-access-7mx7b\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459392 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459408 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-ovnkube-config\") pod \"ovnkube-node-xtwj4\" (UID: 
\"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459433 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-kubelet\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459453 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c6c937-a46a-4dff-a306-27c6430430cd-ovn-node-metrics-cert\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459476 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-log-socket\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459490 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-node-log\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459506 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-cni-bin\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" 
Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459572 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-ovnkube-script-lib\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459591 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459609 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-systemd\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459625 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-env-overrides\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459646 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 
15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459663 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-etc-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459677 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-run-netns\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459712 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-ovn\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459752 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-cni-netd\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459816 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-cni-netd\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459853 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-systemd-units\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459872 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-var-lib-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.459891 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-slash\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460144 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460501 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-run-netns\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460579 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-log-socket\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460581 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460605 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-systemd\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460614 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460633 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-etc-openvswitch\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460750 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-ovnkube-config\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460782 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-cni-bin\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460785 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-node-log\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.461101 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-host-kubelet\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.460535 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b4c6c937-a46a-4dff-a306-27c6430430cd-run-ovn\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.461570 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-env-overrides\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.461896 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b4c6c937-a46a-4dff-a306-27c6430430cd-ovnkube-script-lib\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.464917 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4c6c937-a46a-4dff-a306-27c6430430cd-ovn-node-metrics-cert\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.505035 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mx7b\" (UniqueName: \"kubernetes.io/projected/b4c6c937-a46a-4dff-a306-27c6430430cd-kube-api-access-7mx7b\") pod \"ovnkube-node-xtwj4\" (UID: \"b4c6c937-a46a-4dff-a306-27c6430430cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.619021 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:19 crc kubenswrapper[4835]: W0216 15:17:19.633877 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4c6c937_a46a_4dff_a306_27c6430430cd.slice/crio-343ced05a858ff8709f47c33edf79d8825b7610906addbf401ad8b3c4a503d98 WatchSource:0}: Error finding container 343ced05a858ff8709f47c33edf79d8825b7610906addbf401ad8b3c4a503d98: Status 404 returned error can't find the container with id 343ced05a858ff8709f47c33edf79d8825b7610906addbf401ad8b3c4a503d98 Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.890470 4835 generic.go:334] "Generic (PLEG): container finished" podID="b4c6c937-a46a-4dff-a306-27c6430430cd" containerID="9defa3f207a352cf4782c1e76bdce3ea540d6e27561c4b50ca09799484ae3da3" exitCode=0 Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.890554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerDied","Data":"9defa3f207a352cf4782c1e76bdce3ea540d6e27561c4b50ca09799484ae3da3"} Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.890828 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"343ced05a858ff8709f47c33edf79d8825b7610906addbf401ad8b3c4a503d98"} Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.895033 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovn-acl-logging/0.log" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.895507 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nwz6_9a790a22-cc2f-414e-b43b-fd6df80d19da/ovn-controller/0.log" Feb 16 15:17:19 crc 
kubenswrapper[4835]: I0216 15:17:19.895871 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48" exitCode=0 Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.895901 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a790a22-cc2f-414e-b43b-fd6df80d19da" containerID="271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1" exitCode=0 Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.895961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48"} Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.895967 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.895986 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1"} Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.895999 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nwz6" event={"ID":"9a790a22-cc2f-414e-b43b-fd6df80d19da","Type":"ContainerDied","Data":"f53f650d0776d32d3427dbae6c234ce3becf1abd6f19c6de452bfb0da8df7312"} Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.896015 4835 scope.go:117] "RemoveContainer" containerID="bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.950781 4835 scope.go:117] "RemoveContainer" containerID="6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9" Feb 
16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.954580 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/2.log" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.991238 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nwz6"] Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.997712 4835 scope.go:117] "RemoveContainer" containerID="8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48" Feb 16 15:17:19 crc kubenswrapper[4835]: I0216 15:17:19.999164 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nwz6"] Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.043848 4835 scope.go:117] "RemoveContainer" containerID="271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.070639 4835 scope.go:117] "RemoveContainer" containerID="3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.082768 4835 scope.go:117] "RemoveContainer" containerID="32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.097690 4835 scope.go:117] "RemoveContainer" containerID="d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.118885 4835 scope.go:117] "RemoveContainer" containerID="3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.139952 4835 scope.go:117] "RemoveContainer" containerID="d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.166041 4835 scope.go:117] "RemoveContainer" containerID="bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a" Feb 16 15:17:20 crc 
kubenswrapper[4835]: E0216 15:17:20.167326 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a\": container with ID starting with bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a not found: ID does not exist" containerID="bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.167372 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a"} err="failed to get container status \"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a\": rpc error: code = NotFound desc = could not find container \"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a\": container with ID starting with bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.167399 4835 scope.go:117] "RemoveContainer" containerID="6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9" Feb 16 15:17:20 crc kubenswrapper[4835]: E0216 15:17:20.172675 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\": container with ID starting with 6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9 not found: ID does not exist" containerID="6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.172719 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9"} err="failed to get container status 
\"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\": rpc error: code = NotFound desc = could not find container \"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\": container with ID starting with 6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.172752 4835 scope.go:117] "RemoveContainer" containerID="8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48" Feb 16 15:17:20 crc kubenswrapper[4835]: E0216 15:17:20.174818 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\": container with ID starting with 8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48 not found: ID does not exist" containerID="8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.174855 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48"} err="failed to get container status \"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\": rpc error: code = NotFound desc = could not find container \"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\": container with ID starting with 8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.174879 4835 scope.go:117] "RemoveContainer" containerID="271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1" Feb 16 15:17:20 crc kubenswrapper[4835]: E0216 15:17:20.178896 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\": container with ID starting with 271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1 not found: ID does not exist" containerID="271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.178937 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1"} err="failed to get container status \"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\": rpc error: code = NotFound desc = could not find container \"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\": container with ID starting with 271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.178963 4835 scope.go:117] "RemoveContainer" containerID="3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb" Feb 16 15:17:20 crc kubenswrapper[4835]: E0216 15:17:20.179787 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\": container with ID starting with 3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb not found: ID does not exist" containerID="3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.179824 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb"} err="failed to get container status \"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\": rpc error: code = NotFound desc = could not find container \"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\": container with ID 
starting with 3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.179854 4835 scope.go:117] "RemoveContainer" containerID="32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211" Feb 16 15:17:20 crc kubenswrapper[4835]: E0216 15:17:20.183775 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\": container with ID starting with 32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211 not found: ID does not exist" containerID="32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.183816 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211"} err="failed to get container status \"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\": rpc error: code = NotFound desc = could not find container \"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\": container with ID starting with 32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.183841 4835 scope.go:117] "RemoveContainer" containerID="d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571" Feb 16 15:17:20 crc kubenswrapper[4835]: E0216 15:17:20.184580 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\": container with ID starting with d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571 not found: ID does not exist" containerID="d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571" Feb 16 
15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.184621 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571"} err="failed to get container status \"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\": rpc error: code = NotFound desc = could not find container \"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\": container with ID starting with d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.184651 4835 scope.go:117] "RemoveContainer" containerID="3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6" Feb 16 15:17:20 crc kubenswrapper[4835]: E0216 15:17:20.189007 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\": container with ID starting with 3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6 not found: ID does not exist" containerID="3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.189039 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6"} err="failed to get container status \"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\": rpc error: code = NotFound desc = could not find container \"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\": container with ID starting with 3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.189058 4835 scope.go:117] "RemoveContainer" 
containerID="d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba" Feb 16 15:17:20 crc kubenswrapper[4835]: E0216 15:17:20.189447 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\": container with ID starting with d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba not found: ID does not exist" containerID="d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.189485 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba"} err="failed to get container status \"d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\": rpc error: code = NotFound desc = could not find container \"d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\": container with ID starting with d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.189508 4835 scope.go:117] "RemoveContainer" containerID="bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.189788 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a"} err="failed to get container status \"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a\": rpc error: code = NotFound desc = could not find container \"bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a\": container with ID starting with bfeddfc4ff52173fae24e9e7615ab2a7f0711f8a61c032d2192a504bb0911e6a not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.189817 4835 scope.go:117] 
"RemoveContainer" containerID="6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.190130 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9"} err="failed to get container status \"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\": rpc error: code = NotFound desc = could not find container \"6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9\": container with ID starting with 6b809be5d862d5de3b7b37933f40cef286d656f9676bcaa1b04df73986c10ae9 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.190152 4835 scope.go:117] "RemoveContainer" containerID="8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.190395 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48"} err="failed to get container status \"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\": rpc error: code = NotFound desc = could not find container \"8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48\": container with ID starting with 8c4b3c09d6a26217a05806df5a24ca7739e553ac69e996497a5de1b5d6e91e48 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.190442 4835 scope.go:117] "RemoveContainer" containerID="271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.191260 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1"} err="failed to get container status \"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\": rpc error: code = 
NotFound desc = could not find container \"271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1\": container with ID starting with 271aa1eb9efb2d99fbda953fc3fe7c9de95a6fc68b378923b145e9fa065935e1 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.191287 4835 scope.go:117] "RemoveContainer" containerID="3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.191520 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb"} err="failed to get container status \"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\": rpc error: code = NotFound desc = could not find container \"3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb\": container with ID starting with 3bc5a4a4d3e6f39811e9f982e772f373c91224dcd63a3fed5d302cb60e90d7cb not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.191560 4835 scope.go:117] "RemoveContainer" containerID="32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.192304 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211"} err="failed to get container status \"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\": rpc error: code = NotFound desc = could not find container \"32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211\": container with ID starting with 32168b1a4f20dc3a6fe279966d7b08bc5af54dc688e68bc3c32a45038a1da211 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.192329 4835 scope.go:117] "RemoveContainer" containerID="d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571" Feb 16 15:17:20 crc 
kubenswrapper[4835]: I0216 15:17:20.192560 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571"} err="failed to get container status \"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\": rpc error: code = NotFound desc = could not find container \"d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571\": container with ID starting with d0544d6e9e5203d5f48753432ceba6798e095e62663d83a98597a9c16e183571 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.192579 4835 scope.go:117] "RemoveContainer" containerID="3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.192742 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6"} err="failed to get container status \"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\": rpc error: code = NotFound desc = could not find container \"3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6\": container with ID starting with 3df8588c8c8361bcab836a9ca0da3fce1f8e17c4bb0843ae93cc9e81426977a6 not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.192763 4835 scope.go:117] "RemoveContainer" containerID="d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.192913 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba"} err="failed to get container status \"d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\": rpc error: code = NotFound desc = could not find container \"d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba\": container 
with ID starting with d5541568627b2c3a2224a42cb6cbafb55d96cec6af53dc3965dbc39ac509b9ba not found: ID does not exist" Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.961404 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"02a2db31eae9143bf6472bff115ff2a8460bc9aa3c46745395e41ea1d858d859"} Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.961722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"6d113415de1e7883bb75dab1f163172c0ae9d1bfdd8be7920a48064408c94a23"} Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.961734 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"3afe73fac23ddba01a9a3006970815b1c525b5801625e994ee7f72ec4a171b4b"} Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.961747 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"2c8cf83fc9bbd35be47fc76c15bbebf056218dbca546f30642eff27f8ae58a60"} Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.961756 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"3815a18ccbde64a37a71437780cc5d85de9b0b898032f35c903bcf62f7fbc940"} Feb 16 15:17:20 crc kubenswrapper[4835]: I0216 15:17:20.961766 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" 
event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"a8bbcdcc2752dbbc13dc299441fd1978a128f651b281d285ceab3d3283272256"} Feb 16 15:17:21 crc kubenswrapper[4835]: I0216 15:17:21.385348 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a790a22-cc2f-414e-b43b-fd6df80d19da" path="/var/lib/kubelet/pods/9a790a22-cc2f-414e-b43b-fd6df80d19da/volumes" Feb 16 15:17:22 crc kubenswrapper[4835]: I0216 15:17:22.980874 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"a77491d8fa53dbc5a5e0fb43a23d7145ebc21b23ec0311d42cdb88a464904e33"} Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.887321 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz"] Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.888455 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.894948 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.897860 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.898328 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-2mntn" Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.935317 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjh6b\" (UniqueName: \"kubernetes.io/projected/5176642c-2ed1-4ed0-bdb8-38863827e4db-kube-api-access-kjh6b\") pod \"obo-prometheus-operator-68bc856cb9-zdcgz\" (UID: \"5176642c-2ed1-4ed0-bdb8-38863827e4db\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.955165 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm"] Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.955845 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.957936 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.964830 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-lwlnk" Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.971706 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t"] Feb 16 15:17:24 crc kubenswrapper[4835]: I0216 15:17:24.972346 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.037008 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c645ff2-7682-4ecd-8f33-112527a557ae-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t\" (UID: \"5c645ff2-7682-4ecd-8f33-112527a557ae\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.037169 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjh6b\" (UniqueName: \"kubernetes.io/projected/5176642c-2ed1-4ed0-bdb8-38863827e4db-kube-api-access-kjh6b\") pod \"obo-prometheus-operator-68bc856cb9-zdcgz\" (UID: \"5176642c-2ed1-4ed0-bdb8-38863827e4db\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.037221 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c645ff2-7682-4ecd-8f33-112527a557ae-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t\" (UID: \"5c645ff2-7682-4ecd-8f33-112527a557ae\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.037252 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b1bd882-cd0a-4194-8c67-fe43261fb379-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm\" (UID: \"5b1bd882-cd0a-4194-8c67-fe43261fb379\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.037274 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b1bd882-cd0a-4194-8c67-fe43261fb379-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm\" (UID: \"5b1bd882-cd0a-4194-8c67-fe43261fb379\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.057155 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjh6b\" (UniqueName: \"kubernetes.io/projected/5176642c-2ed1-4ed0-bdb8-38863827e4db-kube-api-access-kjh6b\") pod \"obo-prometheus-operator-68bc856cb9-zdcgz\" (UID: \"5176642c-2ed1-4ed0-bdb8-38863827e4db\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.110632 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tgbsg"] Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.111245 
4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.112897 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.113070 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-m5czf" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.138772 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c645ff2-7682-4ecd-8f33-112527a557ae-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t\" (UID: \"5c645ff2-7682-4ecd-8f33-112527a557ae\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.138826 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab77924-eead-4baa-bad7-82def29f30c8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tgbsg\" (UID: \"8ab77924-eead-4baa-bad7-82def29f30c8\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.138868 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c645ff2-7682-4ecd-8f33-112527a557ae-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t\" (UID: \"5c645ff2-7682-4ecd-8f33-112527a557ae\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.138894 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m52\" (UniqueName: \"kubernetes.io/projected/8ab77924-eead-4baa-bad7-82def29f30c8-kube-api-access-62m52\") pod \"observability-operator-59bdc8b94-tgbsg\" (UID: \"8ab77924-eead-4baa-bad7-82def29f30c8\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.138920 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b1bd882-cd0a-4194-8c67-fe43261fb379-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm\" (UID: \"5b1bd882-cd0a-4194-8c67-fe43261fb379\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.138942 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b1bd882-cd0a-4194-8c67-fe43261fb379-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm\" (UID: \"5b1bd882-cd0a-4194-8c67-fe43261fb379\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.142294 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b1bd882-cd0a-4194-8c67-fe43261fb379-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm\" (UID: \"5b1bd882-cd0a-4194-8c67-fe43261fb379\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.142989 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b1bd882-cd0a-4194-8c67-fe43261fb379-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm\" (UID: \"5b1bd882-cd0a-4194-8c67-fe43261fb379\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.143069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c645ff2-7682-4ecd-8f33-112527a557ae-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t\" (UID: \"5c645ff2-7682-4ecd-8f33-112527a557ae\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.158466 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c645ff2-7682-4ecd-8f33-112527a557ae-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t\" (UID: \"5c645ff2-7682-4ecd-8f33-112527a557ae\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.207609 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.211210 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rn857"] Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.211848 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.213204 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-rtcz7" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.235436 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(446e3dfaaaa56399d080a0b38bb6a438823001a03e89f895ede94c88a4b39a0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.235494 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(446e3dfaaaa56399d080a0b38bb6a438823001a03e89f895ede94c88a4b39a0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.235515 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(446e3dfaaaa56399d080a0b38bb6a438823001a03e89f895ede94c88a4b39a0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.235558 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators(5176642c-2ed1-4ed0-bdb8-38863827e4db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators(5176642c-2ed1-4ed0-bdb8-38863827e4db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(446e3dfaaaa56399d080a0b38bb6a438823001a03e89f895ede94c88a4b39a0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" podUID="5176642c-2ed1-4ed0-bdb8-38863827e4db" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.239870 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab77924-eead-4baa-bad7-82def29f30c8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tgbsg\" (UID: \"8ab77924-eead-4baa-bad7-82def29f30c8\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.239903 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tbv\" (UniqueName: \"kubernetes.io/projected/efb00222-e09d-4776-9026-91280c520e73-kube-api-access-27tbv\") pod \"perses-operator-5bf474d74f-rn857\" (UID: \"efb00222-e09d-4776-9026-91280c520e73\") " pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.239935 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/efb00222-e09d-4776-9026-91280c520e73-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rn857\" (UID: \"efb00222-e09d-4776-9026-91280c520e73\") " pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.239964 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m52\" (UniqueName: \"kubernetes.io/projected/8ab77924-eead-4baa-bad7-82def29f30c8-kube-api-access-62m52\") pod \"observability-operator-59bdc8b94-tgbsg\" (UID: \"8ab77924-eead-4baa-bad7-82def29f30c8\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.244126 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab77924-eead-4baa-bad7-82def29f30c8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tgbsg\" (UID: \"8ab77924-eead-4baa-bad7-82def29f30c8\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.261177 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62m52\" (UniqueName: \"kubernetes.io/projected/8ab77924-eead-4baa-bad7-82def29f30c8-kube-api-access-62m52\") pod \"observability-operator-59bdc8b94-tgbsg\" (UID: \"8ab77924-eead-4baa-bad7-82def29f30c8\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.269048 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.286826 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.291270 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(de22bc333fc594a1252fbfc544d61883fcf108c76a7d01cbf0e39ecb12b3a1f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.291321 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(de22bc333fc594a1252fbfc544d61883fcf108c76a7d01cbf0e39ecb12b3a1f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.291344 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(de22bc333fc594a1252fbfc544d61883fcf108c76a7d01cbf0e39ecb12b3a1f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.291410 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators(5b1bd882-cd0a-4194-8c67-fe43261fb379)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators(5b1bd882-cd0a-4194-8c67-fe43261fb379)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(de22bc333fc594a1252fbfc544d61883fcf108c76a7d01cbf0e39ecb12b3a1f8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" podUID="5b1bd882-cd0a-4194-8c67-fe43261fb379" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.310638 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(3ea1f58360ad1cb4183c2a53019cd139de00bcdf18540e44789188a88cdd6159): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.310702 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(3ea1f58360ad1cb4183c2a53019cd139de00bcdf18540e44789188a88cdd6159): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.310722 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(3ea1f58360ad1cb4183c2a53019cd139de00bcdf18540e44789188a88cdd6159): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.310914 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators(5c645ff2-7682-4ecd-8f33-112527a557ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators(5c645ff2-7682-4ecd-8f33-112527a557ae)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(3ea1f58360ad1cb4183c2a53019cd139de00bcdf18540e44789188a88cdd6159): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" podUID="5c645ff2-7682-4ecd-8f33-112527a557ae" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.341074 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/efb00222-e09d-4776-9026-91280c520e73-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rn857\" (UID: \"efb00222-e09d-4776-9026-91280c520e73\") " pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.341905 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/efb00222-e09d-4776-9026-91280c520e73-openshift-service-ca\") pod \"perses-operator-5bf474d74f-rn857\" (UID: \"efb00222-e09d-4776-9026-91280c520e73\") " pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.347731 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27tbv\" (UniqueName: \"kubernetes.io/projected/efb00222-e09d-4776-9026-91280c520e73-kube-api-access-27tbv\") pod \"perses-operator-5bf474d74f-rn857\" (UID: \"efb00222-e09d-4776-9026-91280c520e73\") " pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.371377 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tbv\" (UniqueName: \"kubernetes.io/projected/efb00222-e09d-4776-9026-91280c520e73-kube-api-access-27tbv\") pod \"perses-operator-5bf474d74f-rn857\" (UID: \"efb00222-e09d-4776-9026-91280c520e73\") " pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.432780 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.459671 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(67c1e7002738ddd3a5ebee4cf21f66a7c0cba2cd312703e7ab11602a728b05b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.459754 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(67c1e7002738ddd3a5ebee4cf21f66a7c0cba2cd312703e7ab11602a728b05b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.459776 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(67c1e7002738ddd3a5ebee4cf21f66a7c0cba2cd312703e7ab11602a728b05b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.459843 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-tgbsg_openshift-operators(8ab77924-eead-4baa-bad7-82def29f30c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-tgbsg_openshift-operators(8ab77924-eead-4baa-bad7-82def29f30c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(67c1e7002738ddd3a5ebee4cf21f66a7c0cba2cd312703e7ab11602a728b05b1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" podUID="8ab77924-eead-4baa-bad7-82def29f30c8" Feb 16 15:17:25 crc kubenswrapper[4835]: I0216 15:17:25.527622 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.547164 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(cd441a67fc09f9abb675a2d14d70d09054163f062274985365f0e90d63cae9d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.547236 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(cd441a67fc09f9abb675a2d14d70d09054163f062274985365f0e90d63cae9d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.547264 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(cd441a67fc09f9abb675a2d14d70d09054163f062274985365f0e90d63cae9d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:25 crc kubenswrapper[4835]: E0216 15:17:25.547310 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-rn857_openshift-operators(efb00222-e09d-4776-9026-91280c520e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-rn857_openshift-operators(efb00222-e09d-4776-9026-91280c520e73)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(cd441a67fc09f9abb675a2d14d70d09054163f062274985365f0e90d63cae9d7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-rn857" podUID="efb00222-e09d-4776-9026-91280c520e73" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.002339 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" event={"ID":"b4c6c937-a46a-4dff-a306-27c6430430cd","Type":"ContainerStarted","Data":"fe092fda67fa084b2b644e8ee6b5262b11d4cd3b4720ac22e47800792ba2479d"} Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.002778 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.002833 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.031346 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" podStartSLOduration=7.031327092 podStartE2EDuration="7.031327092s" podCreationTimestamp="2026-02-16 15:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:17:26.030957063 +0000 UTC m=+595.322949988" watchObservedRunningTime="2026-02-16 15:17:26.031327092 +0000 UTC m=+595.323319987" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.048899 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.156029 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz"] Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.156135 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.156596 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.181172 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tgbsg"] Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.181301 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.181834 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.183034 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm"] Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.183139 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.183615 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.184517 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(ca623f815371d12578f57be38874badc3024355218a3cfe6c31abedd20439bad): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.184592 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(ca623f815371d12578f57be38874badc3024355218a3cfe6c31abedd20439bad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.184632 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(ca623f815371d12578f57be38874badc3024355218a3cfe6c31abedd20439bad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.184673 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators(5176642c-2ed1-4ed0-bdb8-38863827e4db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators(5176642c-2ed1-4ed0-bdb8-38863827e4db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(ca623f815371d12578f57be38874badc3024355218a3cfe6c31abedd20439bad): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" podUID="5176642c-2ed1-4ed0-bdb8-38863827e4db" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.196715 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rn857"] Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.196843 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.197328 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.200261 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t"] Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.200629 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:26 crc kubenswrapper[4835]: I0216 15:17:26.201100 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.275379 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(79cec6b6ce36bb06eb19457cd512bff955e4d5cc88e7c94b992042a0a6909387): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.275442 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(79cec6b6ce36bb06eb19457cd512bff955e4d5cc88e7c94b992042a0a6909387): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.275465 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(79cec6b6ce36bb06eb19457cd512bff955e4d5cc88e7c94b992042a0a6909387): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.275506 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-tgbsg_openshift-operators(8ab77924-eead-4baa-bad7-82def29f30c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-tgbsg_openshift-operators(8ab77924-eead-4baa-bad7-82def29f30c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(79cec6b6ce36bb06eb19457cd512bff955e4d5cc88e7c94b992042a0a6909387): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" podUID="8ab77924-eead-4baa-bad7-82def29f30c8" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.293795 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(4396355791e6d3fb175e7681b2fa83ab5778f33433219cd0fab25efdb1d00af1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.293887 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(4396355791e6d3fb175e7681b2fa83ab5778f33433219cd0fab25efdb1d00af1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.293926 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(4396355791e6d3fb175e7681b2fa83ab5778f33433219cd0fab25efdb1d00af1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.293985 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators(5b1bd882-cd0a-4194-8c67-fe43261fb379)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators(5b1bd882-cd0a-4194-8c67-fe43261fb379)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(4396355791e6d3fb175e7681b2fa83ab5778f33433219cd0fab25efdb1d00af1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" podUID="5b1bd882-cd0a-4194-8c67-fe43261fb379" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.309815 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(2753966c71565f89b7d64b08d22b2dce7b672fd46845bd569b525e7918d998ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.309970 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(2753966c71565f89b7d64b08d22b2dce7b672fd46845bd569b525e7918d998ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.310015 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(2753966c71565f89b7d64b08d22b2dce7b672fd46845bd569b525e7918d998ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.310072 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-rn857_openshift-operators(efb00222-e09d-4776-9026-91280c520e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-rn857_openshift-operators(efb00222-e09d-4776-9026-91280c520e73)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(2753966c71565f89b7d64b08d22b2dce7b672fd46845bd569b525e7918d998ed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-rn857" podUID="efb00222-e09d-4776-9026-91280c520e73" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.315387 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(71c27c8b21c7301dc2b9b84af7ed25070e33926bdf13db2fa9763b05b4377209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.315467 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(71c27c8b21c7301dc2b9b84af7ed25070e33926bdf13db2fa9763b05b4377209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.315491 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(71c27c8b21c7301dc2b9b84af7ed25070e33926bdf13db2fa9763b05b4377209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:26 crc kubenswrapper[4835]: E0216 15:17:26.315575 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators(5c645ff2-7682-4ecd-8f33-112527a557ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators(5c645ff2-7682-4ecd-8f33-112527a557ae)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(71c27c8b21c7301dc2b9b84af7ed25070e33926bdf13db2fa9763b05b4377209): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" podUID="5c645ff2-7682-4ecd-8f33-112527a557ae" Feb 16 15:17:27 crc kubenswrapper[4835]: I0216 15:17:27.014734 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:27 crc kubenswrapper[4835]: I0216 15:17:27.078028 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:29 crc kubenswrapper[4835]: I0216 15:17:29.378350 4835 scope.go:117] "RemoveContainer" containerID="98f8e6d7b44084a40632591b1774ef5147c6f4e787ac6fb60321e2810fa9ec35" Feb 16 15:17:29 crc kubenswrapper[4835]: E0216 15:17:29.379814 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gncxk_openshift-multus(36a4edb0-ce1a-4b59-b1f9-f5b43255de2d)\"" pod="openshift-multus/multus-gncxk" podUID="36a4edb0-ce1a-4b59-b1f9-f5b43255de2d" Feb 16 15:17:37 crc kubenswrapper[4835]: I0216 15:17:37.378459 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:37 crc kubenswrapper[4835]: I0216 15:17:37.379355 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:37 crc kubenswrapper[4835]: E0216 15:17:37.416268 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(688412fd269f9bc59b8b6cfc3946075c9541f10234315b8647c1fad0ade5a6ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:37 crc kubenswrapper[4835]: E0216 15:17:37.416359 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(688412fd269f9bc59b8b6cfc3946075c9541f10234315b8647c1fad0ade5a6ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:37 crc kubenswrapper[4835]: E0216 15:17:37.416394 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(688412fd269f9bc59b8b6cfc3946075c9541f10234315b8647c1fad0ade5a6ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:37 crc kubenswrapper[4835]: E0216 15:17:37.416607 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators(5c645ff2-7682-4ecd-8f33-112527a557ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators(5c645ff2-7682-4ecd-8f33-112527a557ae)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_openshift-operators_5c645ff2-7682-4ecd-8f33-112527a557ae_0(688412fd269f9bc59b8b6cfc3946075c9541f10234315b8647c1fad0ade5a6ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" podUID="5c645ff2-7682-4ecd-8f33-112527a557ae" Feb 16 15:17:39 crc kubenswrapper[4835]: I0216 15:17:39.378069 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:39 crc kubenswrapper[4835]: I0216 15:17:39.379676 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:39 crc kubenswrapper[4835]: E0216 15:17:39.406163 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(00105b5ba9b175d3b4a85114397b3cad7aed04cefb1dae756952d7b1812f22f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:17:39 crc kubenswrapper[4835]: E0216 15:17:39.406241 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(00105b5ba9b175d3b4a85114397b3cad7aed04cefb1dae756952d7b1812f22f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:39 crc kubenswrapper[4835]: E0216 15:17:39.406272 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(00105b5ba9b175d3b4a85114397b3cad7aed04cefb1dae756952d7b1812f22f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:39 crc kubenswrapper[4835]: E0216 15:17:39.406359 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators(5b1bd882-cd0a-4194-8c67-fe43261fb379)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators(5b1bd882-cd0a-4194-8c67-fe43261fb379)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_openshift-operators_5b1bd882-cd0a-4194-8c67-fe43261fb379_0(00105b5ba9b175d3b4a85114397b3cad7aed04cefb1dae756952d7b1812f22f9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" podUID="5b1bd882-cd0a-4194-8c67-fe43261fb379" Feb 16 15:17:40 crc kubenswrapper[4835]: I0216 15:17:40.379020 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:40 crc kubenswrapper[4835]: I0216 15:17:40.379281 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:40 crc kubenswrapper[4835]: E0216 15:17:40.440781 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(05149587ef1c794e1bdf63f3b630f09c2dc21d66e2b64cfe1ebc59c02e134678): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:40 crc kubenswrapper[4835]: E0216 15:17:40.441043 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(05149587ef1c794e1bdf63f3b630f09c2dc21d66e2b64cfe1ebc59c02e134678): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:40 crc kubenswrapper[4835]: E0216 15:17:40.441062 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(05149587ef1c794e1bdf63f3b630f09c2dc21d66e2b64cfe1ebc59c02e134678): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:40 crc kubenswrapper[4835]: E0216 15:17:40.441111 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-tgbsg_openshift-operators(8ab77924-eead-4baa-bad7-82def29f30c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-tgbsg_openshift-operators(8ab77924-eead-4baa-bad7-82def29f30c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-tgbsg_openshift-operators_8ab77924-eead-4baa-bad7-82def29f30c8_0(05149587ef1c794e1bdf63f3b630f09c2dc21d66e2b64cfe1ebc59c02e134678): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" podUID="8ab77924-eead-4baa-bad7-82def29f30c8" Feb 16 15:17:41 crc kubenswrapper[4835]: I0216 15:17:41.378188 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:41 crc kubenswrapper[4835]: I0216 15:17:41.378659 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:41 crc kubenswrapper[4835]: I0216 15:17:41.380851 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:41 crc kubenswrapper[4835]: I0216 15:17:41.382798 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:41 crc kubenswrapper[4835]: E0216 15:17:41.417980 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(09bd0815e869408911fd6d030ae156044bf69c6d43cc38ea04c3dc083a3eb764): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:41 crc kubenswrapper[4835]: E0216 15:17:41.418037 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(09bd0815e869408911fd6d030ae156044bf69c6d43cc38ea04c3dc083a3eb764): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:41 crc kubenswrapper[4835]: E0216 15:17:41.418062 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(09bd0815e869408911fd6d030ae156044bf69c6d43cc38ea04c3dc083a3eb764): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:41 crc kubenswrapper[4835]: E0216 15:17:41.418103 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-rn857_openshift-operators(efb00222-e09d-4776-9026-91280c520e73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-rn857_openshift-operators(efb00222-e09d-4776-9026-91280c520e73)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-rn857_openshift-operators_efb00222-e09d-4776-9026-91280c520e73_0(09bd0815e869408911fd6d030ae156044bf69c6d43cc38ea04c3dc083a3eb764): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-rn857" podUID="efb00222-e09d-4776-9026-91280c520e73" Feb 16 15:17:41 crc kubenswrapper[4835]: E0216 15:17:41.422469 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(79106ea2ac711ac6fac15f67f050f49c36e36489c6da312e2d4942a96ce8fffd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:17:41 crc kubenswrapper[4835]: E0216 15:17:41.422508 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(79106ea2ac711ac6fac15f67f050f49c36e36489c6da312e2d4942a96ce8fffd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:41 crc kubenswrapper[4835]: E0216 15:17:41.422523 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(79106ea2ac711ac6fac15f67f050f49c36e36489c6da312e2d4942a96ce8fffd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:41 crc kubenswrapper[4835]: E0216 15:17:41.422560 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators(5176642c-2ed1-4ed0-bdb8-38863827e4db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators(5176642c-2ed1-4ed0-bdb8-38863827e4db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zdcgz_openshift-operators_5176642c-2ed1-4ed0-bdb8-38863827e4db_0(79106ea2ac711ac6fac15f67f050f49c36e36489c6da312e2d4942a96ce8fffd): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" podUID="5176642c-2ed1-4ed0-bdb8-38863827e4db" Feb 16 15:17:42 crc kubenswrapper[4835]: I0216 15:17:42.378682 4835 scope.go:117] "RemoveContainer" containerID="98f8e6d7b44084a40632591b1774ef5147c6f4e787ac6fb60321e2810fa9ec35" Feb 16 15:17:43 crc kubenswrapper[4835]: I0216 15:17:43.105054 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gncxk_36a4edb0-ce1a-4b59-b1f9-f5b43255de2d/kube-multus/2.log" Feb 16 15:17:43 crc kubenswrapper[4835]: I0216 15:17:43.105491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gncxk" event={"ID":"36a4edb0-ce1a-4b59-b1f9-f5b43255de2d","Type":"ContainerStarted","Data":"d7e41bc03afb7acfa6a06d7d06765f7d688edbe6153d498e744e6b157aaf11df"} Feb 16 15:17:49 crc kubenswrapper[4835]: I0216 15:17:49.643761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtwj4" Feb 16 15:17:51 crc kubenswrapper[4835]: I0216 15:17:51.378704 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:51 crc kubenswrapper[4835]: I0216 15:17:51.381600 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" Feb 16 15:17:51 crc kubenswrapper[4835]: I0216 15:17:51.646884 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t"] Feb 16 15:17:51 crc kubenswrapper[4835]: W0216 15:17:51.655403 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c645ff2_7682_4ecd_8f33_112527a557ae.slice/crio-9752e3abc5d66db2ff03090fb965a203cdc0ce4a332aa6bb1d788356441ae49d WatchSource:0}: Error finding container 9752e3abc5d66db2ff03090fb965a203cdc0ce4a332aa6bb1d788356441ae49d: Status 404 returned error can't find the container with id 9752e3abc5d66db2ff03090fb965a203cdc0ce4a332aa6bb1d788356441ae49d Feb 16 15:17:52 crc kubenswrapper[4835]: I0216 15:17:52.150467 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" event={"ID":"5c645ff2-7682-4ecd-8f33-112527a557ae","Type":"ContainerStarted","Data":"9752e3abc5d66db2ff03090fb965a203cdc0ce4a332aa6bb1d788356441ae49d"} Feb 16 15:17:53 crc kubenswrapper[4835]: I0216 15:17:53.378812 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:53 crc kubenswrapper[4835]: I0216 15:17:53.379505 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" Feb 16 15:17:53 crc kubenswrapper[4835]: I0216 15:17:53.761807 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm"] Feb 16 15:17:53 crc kubenswrapper[4835]: W0216 15:17:53.770618 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b1bd882_cd0a_4194_8c67_fe43261fb379.slice/crio-697e3f722f76f5803b1f92d8f88f5839074893913f9a9e678857172fac88e5d3 WatchSource:0}: Error finding container 697e3f722f76f5803b1f92d8f88f5839074893913f9a9e678857172fac88e5d3: Status 404 returned error can't find the container with id 697e3f722f76f5803b1f92d8f88f5839074893913f9a9e678857172fac88e5d3 Feb 16 15:17:54 crc kubenswrapper[4835]: I0216 15:17:54.166227 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" event={"ID":"5b1bd882-cd0a-4194-8c67-fe43261fb379","Type":"ContainerStarted","Data":"697e3f722f76f5803b1f92d8f88f5839074893913f9a9e678857172fac88e5d3"} Feb 16 15:17:54 crc kubenswrapper[4835]: I0216 15:17:54.378672 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:54 crc kubenswrapper[4835]: I0216 15:17:54.379015 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:54 crc kubenswrapper[4835]: I0216 15:17:54.379557 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" Feb 16 15:17:54 crc kubenswrapper[4835]: I0216 15:17:54.379553 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:17:55 crc kubenswrapper[4835]: I0216 15:17:55.436451 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz"] Feb 16 15:17:55 crc kubenswrapper[4835]: W0216 15:17:55.448592 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5176642c_2ed1_4ed0_bdb8_38863827e4db.slice/crio-09a127a8241ecb45f946bded8ebab6645881d9ac412f9eb22ea7519eb5a12626 WatchSource:0}: Error finding container 09a127a8241ecb45f946bded8ebab6645881d9ac412f9eb22ea7519eb5a12626: Status 404 returned error can't find the container with id 09a127a8241ecb45f946bded8ebab6645881d9ac412f9eb22ea7519eb5a12626 Feb 16 15:17:55 crc kubenswrapper[4835]: I0216 15:17:55.468862 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tgbsg"] Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.180514 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" event={"ID":"5b1bd882-cd0a-4194-8c67-fe43261fb379","Type":"ContainerStarted","Data":"ca6cf83aa7f49223c56e9f2860d322ce57ad7a787a3a35df80cf214e7daf6208"} Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.182426 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" event={"ID":"5c645ff2-7682-4ecd-8f33-112527a557ae","Type":"ContainerStarted","Data":"2f38c94cb2c6b2007a086ba7941c2a93ca2b994e674428fdfa166ae833613869"} Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.183465 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" 
event={"ID":"5176642c-2ed1-4ed0-bdb8-38863827e4db","Type":"ContainerStarted","Data":"09a127a8241ecb45f946bded8ebab6645881d9ac412f9eb22ea7519eb5a12626"} Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.184313 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" event={"ID":"8ab77924-eead-4baa-bad7-82def29f30c8","Type":"ContainerStarted","Data":"fd40e5cc6d02c0572fdb98f1190cf55cd87e2cfdcc09f9484ffca69f1b79bc00"} Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.198846 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-pvlgm" podStartSLOduration=30.670685642 podStartE2EDuration="32.198824392s" podCreationTimestamp="2026-02-16 15:17:24 +0000 UTC" firstStartedPulling="2026-02-16 15:17:53.772703542 +0000 UTC m=+623.064696437" lastFinishedPulling="2026-02-16 15:17:55.300842292 +0000 UTC m=+624.592835187" observedRunningTime="2026-02-16 15:17:56.196137572 +0000 UTC m=+625.488130477" watchObservedRunningTime="2026-02-16 15:17:56.198824392 +0000 UTC m=+625.490817287" Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.246825 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b699949c-qs52t" podStartSLOduration=28.586049875 podStartE2EDuration="32.246802273s" podCreationTimestamp="2026-02-16 15:17:24 +0000 UTC" firstStartedPulling="2026-02-16 15:17:51.657808852 +0000 UTC m=+620.949801757" lastFinishedPulling="2026-02-16 15:17:55.31856124 +0000 UTC m=+624.610554155" observedRunningTime="2026-02-16 15:17:56.230788719 +0000 UTC m=+625.522781614" watchObservedRunningTime="2026-02-16 15:17:56.246802273 +0000 UTC m=+625.538795168" Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.378409 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.379114 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:17:56 crc kubenswrapper[4835]: I0216 15:17:56.572204 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-rn857"] Feb 16 15:17:56 crc kubenswrapper[4835]: W0216 15:17:56.578923 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefb00222_e09d_4776_9026_91280c520e73.slice/crio-e363b5e81beb2b67fe32d3a5a927b074940128a15992917db733bf36dc1a749f WatchSource:0}: Error finding container e363b5e81beb2b67fe32d3a5a927b074940128a15992917db733bf36dc1a749f: Status 404 returned error can't find the container with id e363b5e81beb2b67fe32d3a5a927b074940128a15992917db733bf36dc1a749f Feb 16 15:17:57 crc kubenswrapper[4835]: I0216 15:17:57.195676 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-rn857" event={"ID":"efb00222-e09d-4776-9026-91280c520e73","Type":"ContainerStarted","Data":"e363b5e81beb2b67fe32d3a5a927b074940128a15992917db733bf36dc1a749f"} Feb 16 15:17:58 crc kubenswrapper[4835]: I0216 15:17:58.204189 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" event={"ID":"5176642c-2ed1-4ed0-bdb8-38863827e4db","Type":"ContainerStarted","Data":"dcb9732246f043e8561c75460024b1fcd376f5dce498087cf55a300e78e80a97"} Feb 16 15:17:58 crc kubenswrapper[4835]: I0216 15:17:58.225813 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zdcgz" podStartSLOduration=31.873040734 podStartE2EDuration="34.225794827s" podCreationTimestamp="2026-02-16 15:17:24 +0000 UTC" 
firstStartedPulling="2026-02-16 15:17:55.45112425 +0000 UTC m=+624.743117145" lastFinishedPulling="2026-02-16 15:17:57.803878343 +0000 UTC m=+627.095871238" observedRunningTime="2026-02-16 15:17:58.225734396 +0000 UTC m=+627.517727291" watchObservedRunningTime="2026-02-16 15:17:58.225794827 +0000 UTC m=+627.517787712" Feb 16 15:18:01 crc kubenswrapper[4835]: I0216 15:18:01.221242 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" event={"ID":"8ab77924-eead-4baa-bad7-82def29f30c8","Type":"ContainerStarted","Data":"bee4cd4f3b661e0e060e47dc253611b887adfaac16c2b43923f5f4ca0144dd6e"} Feb 16 15:18:01 crc kubenswrapper[4835]: I0216 15:18:01.222321 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:18:01 crc kubenswrapper[4835]: I0216 15:18:01.222677 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-rn857" event={"ID":"efb00222-e09d-4776-9026-91280c520e73","Type":"ContainerStarted","Data":"3c3a3082ff00abe66e75b35ceda06348cf597593a8a3526a58af92398ee58cd5"} Feb 16 15:18:01 crc kubenswrapper[4835]: I0216 15:18:01.222875 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:18:01 crc kubenswrapper[4835]: I0216 15:18:01.237258 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" Feb 16 15:18:01 crc kubenswrapper[4835]: I0216 15:18:01.252522 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-tgbsg" podStartSLOduration=31.326533346 podStartE2EDuration="36.252498944s" podCreationTimestamp="2026-02-16 15:17:25 +0000 UTC" firstStartedPulling="2026-02-16 15:17:55.478749264 +0000 UTC m=+624.770742159" 
lastFinishedPulling="2026-02-16 15:18:00.404714862 +0000 UTC m=+629.696707757" observedRunningTime="2026-02-16 15:18:01.246114258 +0000 UTC m=+630.538107163" watchObservedRunningTime="2026-02-16 15:18:01.252498944 +0000 UTC m=+630.544491839" Feb 16 15:18:01 crc kubenswrapper[4835]: I0216 15:18:01.269796 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-rn857" podStartSLOduration=32.465812917 podStartE2EDuration="36.26976843s" podCreationTimestamp="2026-02-16 15:17:25 +0000 UTC" firstStartedPulling="2026-02-16 15:17:56.581774348 +0000 UTC m=+625.873767243" lastFinishedPulling="2026-02-16 15:18:00.385729861 +0000 UTC m=+629.677722756" observedRunningTime="2026-02-16 15:18:01.265459459 +0000 UTC m=+630.557452344" watchObservedRunningTime="2026-02-16 15:18:01.26976843 +0000 UTC m=+630.561761335" Feb 16 15:18:05 crc kubenswrapper[4835]: I0216 15:18:05.531090 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-rn857" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.015714 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl"] Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.017396 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.021549 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.021877 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.023553 4835 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-g7h9m" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.027084 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl"] Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.038781 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qhxb8"] Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.039717 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qhxb8" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.043651 4835 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-b8qxt" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.050121 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z9xgl"] Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.050917 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.052739 4835 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5798z" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.068804 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qhxb8"] Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.072172 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z9xgl"] Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.119950 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wdq\" (UniqueName: \"kubernetes.io/projected/845954bd-7996-428a-8b39-9746616e7e1e-kube-api-access-h5wdq\") pod \"cert-manager-858654f9db-qhxb8\" (UID: \"845954bd-7996-428a-8b39-9746616e7e1e\") " pod="cert-manager/cert-manager-858654f9db-qhxb8" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.120033 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwqr\" (UniqueName: \"kubernetes.io/projected/b37e2863-cd3c-45b3-b774-ad506f0abeef-kube-api-access-lrwqr\") pod \"cert-manager-webhook-687f57d79b-z9xgl\" (UID: \"b37e2863-cd3c-45b3-b774-ad506f0abeef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.120072 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7nzf\" (UniqueName: \"kubernetes.io/projected/74f1c44a-b017-4269-96c0-9dc9359becef-kube-api-access-b7nzf\") pod \"cert-manager-cainjector-cf98fcc89-mnqkl\" (UID: \"74f1c44a-b017-4269-96c0-9dc9359becef\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.220920 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrwqr\" (UniqueName: \"kubernetes.io/projected/b37e2863-cd3c-45b3-b774-ad506f0abeef-kube-api-access-lrwqr\") pod \"cert-manager-webhook-687f57d79b-z9xgl\" (UID: \"b37e2863-cd3c-45b3-b774-ad506f0abeef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.220987 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7nzf\" (UniqueName: \"kubernetes.io/projected/74f1c44a-b017-4269-96c0-9dc9359becef-kube-api-access-b7nzf\") pod \"cert-manager-cainjector-cf98fcc89-mnqkl\" (UID: \"74f1c44a-b017-4269-96c0-9dc9359becef\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.221036 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wdq\" (UniqueName: \"kubernetes.io/projected/845954bd-7996-428a-8b39-9746616e7e1e-kube-api-access-h5wdq\") pod \"cert-manager-858654f9db-qhxb8\" (UID: \"845954bd-7996-428a-8b39-9746616e7e1e\") " pod="cert-manager/cert-manager-858654f9db-qhxb8" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.237670 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrwqr\" (UniqueName: \"kubernetes.io/projected/b37e2863-cd3c-45b3-b774-ad506f0abeef-kube-api-access-lrwqr\") pod \"cert-manager-webhook-687f57d79b-z9xgl\" (UID: \"b37e2863-cd3c-45b3-b774-ad506f0abeef\") " pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.237698 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wdq\" (UniqueName: \"kubernetes.io/projected/845954bd-7996-428a-8b39-9746616e7e1e-kube-api-access-h5wdq\") pod \"cert-manager-858654f9db-qhxb8\" (UID: \"845954bd-7996-428a-8b39-9746616e7e1e\") " 
pod="cert-manager/cert-manager-858654f9db-qhxb8" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.239324 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7nzf\" (UniqueName: \"kubernetes.io/projected/74f1c44a-b017-4269-96c0-9dc9359becef-kube-api-access-b7nzf\") pod \"cert-manager-cainjector-cf98fcc89-mnqkl\" (UID: \"74f1c44a-b017-4269-96c0-9dc9359becef\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.346771 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.362708 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qhxb8" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.368816 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.620170 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qhxb8"] Feb 16 15:18:11 crc kubenswrapper[4835]: W0216 15:18:11.660361 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb37e2863_cd3c_45b3_b774_ad506f0abeef.slice/crio-e1ec041504a952ab36ebe85b4f828a796f15a3db456907ca8418999507094d38 WatchSource:0}: Error finding container e1ec041504a952ab36ebe85b4f828a796f15a3db456907ca8418999507094d38: Status 404 returned error can't find the container with id e1ec041504a952ab36ebe85b4f828a796f15a3db456907ca8418999507094d38 Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.660888 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-z9xgl"] Feb 16 15:18:11 crc kubenswrapper[4835]: W0216 15:18:11.789775 
4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74f1c44a_b017_4269_96c0_9dc9359becef.slice/crio-bf91b2bc9f1315db38f3016ea7fcfc28508b1d12b5e6fa4dbbebb78c6e843d20 WatchSource:0}: Error finding container bf91b2bc9f1315db38f3016ea7fcfc28508b1d12b5e6fa4dbbebb78c6e843d20: Status 404 returned error can't find the container with id bf91b2bc9f1315db38f3016ea7fcfc28508b1d12b5e6fa4dbbebb78c6e843d20 Feb 16 15:18:11 crc kubenswrapper[4835]: I0216 15:18:11.789924 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl"] Feb 16 15:18:12 crc kubenswrapper[4835]: I0216 15:18:12.318463 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" event={"ID":"b37e2863-cd3c-45b3-b774-ad506f0abeef","Type":"ContainerStarted","Data":"e1ec041504a952ab36ebe85b4f828a796f15a3db456907ca8418999507094d38"} Feb 16 15:18:12 crc kubenswrapper[4835]: I0216 15:18:12.320752 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl" event={"ID":"74f1c44a-b017-4269-96c0-9dc9359becef","Type":"ContainerStarted","Data":"bf91b2bc9f1315db38f3016ea7fcfc28508b1d12b5e6fa4dbbebb78c6e843d20"} Feb 16 15:18:12 crc kubenswrapper[4835]: I0216 15:18:12.323477 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qhxb8" event={"ID":"845954bd-7996-428a-8b39-9746616e7e1e","Type":"ContainerStarted","Data":"bac010123e2aae4810fefa7b2cb1690c46a7753b1dcb2ef411a6b51be0b57cec"} Feb 16 15:18:15 crc kubenswrapper[4835]: I0216 15:18:15.341485 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" event={"ID":"b37e2863-cd3c-45b3-b774-ad506f0abeef","Type":"ContainerStarted","Data":"01f7059eb12180eac8ca6b1759ca6b8187bd01b0567eaa6c679d97e8938f71a6"} Feb 16 15:18:15 crc kubenswrapper[4835]: 
I0216 15:18:15.341933 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" Feb 16 15:18:15 crc kubenswrapper[4835]: I0216 15:18:15.342918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl" event={"ID":"74f1c44a-b017-4269-96c0-9dc9359becef","Type":"ContainerStarted","Data":"32c04ed915e0bef8e408615ab6c30962c293438b1fe60cdd6434312e622677da"} Feb 16 15:18:15 crc kubenswrapper[4835]: I0216 15:18:15.356952 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" podStartSLOduration=1.323618718 podStartE2EDuration="4.356937035s" podCreationTimestamp="2026-02-16 15:18:11 +0000 UTC" firstStartedPulling="2026-02-16 15:18:11.662234709 +0000 UTC m=+640.954227604" lastFinishedPulling="2026-02-16 15:18:14.695553026 +0000 UTC m=+643.987545921" observedRunningTime="2026-02-16 15:18:15.356829003 +0000 UTC m=+644.648821938" watchObservedRunningTime="2026-02-16 15:18:15.356937035 +0000 UTC m=+644.648929930" Feb 16 15:18:15 crc kubenswrapper[4835]: I0216 15:18:15.371866 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mnqkl" podStartSLOduration=2.476939324 podStartE2EDuration="5.371842831s" podCreationTimestamp="2026-02-16 15:18:10 +0000 UTC" firstStartedPulling="2026-02-16 15:18:11.793426572 +0000 UTC m=+641.085419467" lastFinishedPulling="2026-02-16 15:18:14.688330079 +0000 UTC m=+643.980322974" observedRunningTime="2026-02-16 15:18:15.369058489 +0000 UTC m=+644.661051394" watchObservedRunningTime="2026-02-16 15:18:15.371842831 +0000 UTC m=+644.663835736" Feb 16 15:18:17 crc kubenswrapper[4835]: I0216 15:18:17.355955 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qhxb8" 
event={"ID":"845954bd-7996-428a-8b39-9746616e7e1e","Type":"ContainerStarted","Data":"fa6d9bb1cf3b54994a9e41aac6cb98fd960d768d180e8c5239bf27d6d1348e94"} Feb 16 15:18:17 crc kubenswrapper[4835]: I0216 15:18:17.379864 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qhxb8" podStartSLOduration=2.640484084 podStartE2EDuration="7.379837824s" podCreationTimestamp="2026-02-16 15:18:10 +0000 UTC" firstStartedPulling="2026-02-16 15:18:11.627215843 +0000 UTC m=+640.919208738" lastFinishedPulling="2026-02-16 15:18:16.366569583 +0000 UTC m=+645.658562478" observedRunningTime="2026-02-16 15:18:17.37307882 +0000 UTC m=+646.665071745" watchObservedRunningTime="2026-02-16 15:18:17.379837824 +0000 UTC m=+646.671830719" Feb 16 15:18:21 crc kubenswrapper[4835]: I0216 15:18:21.371669 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-z9xgl" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.482267 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z"] Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.483823 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.485876 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.492774 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z"] Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.517817 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnh6v\" (UniqueName: \"kubernetes.io/projected/21de0e14-792d-4a0f-9a1b-6303df1eac87-kube-api-access-cnh6v\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.517904 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.517950 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: 
I0216 15:18:44.618583 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnh6v\" (UniqueName: \"kubernetes.io/projected/21de0e14-792d-4a0f-9a1b-6303df1eac87-kube-api-access-cnh6v\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.618640 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.618684 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.619139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.619668 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.636606 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnh6v\" (UniqueName: \"kubernetes.io/projected/21de0e14-792d-4a0f-9a1b-6303df1eac87-kube-api-access-cnh6v\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:44 crc kubenswrapper[4835]: I0216 15:18:44.798703 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:45 crc kubenswrapper[4835]: I0216 15:18:45.221710 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z"] Feb 16 15:18:45 crc kubenswrapper[4835]: I0216 15:18:45.532018 4835 generic.go:334] "Generic (PLEG): container finished" podID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerID="3e0301bcdc1111630fa3b7b3f8b3f3dd8acb7429dd8e6b38a30e886de20b556e" exitCode=0 Feb 16 15:18:45 crc kubenswrapper[4835]: I0216 15:18:45.532130 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" event={"ID":"21de0e14-792d-4a0f-9a1b-6303df1eac87","Type":"ContainerDied","Data":"3e0301bcdc1111630fa3b7b3f8b3f3dd8acb7429dd8e6b38a30e886de20b556e"} Feb 16 15:18:45 crc kubenswrapper[4835]: I0216 15:18:45.532308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" event={"ID":"21de0e14-792d-4a0f-9a1b-6303df1eac87","Type":"ContainerStarted","Data":"055c1ac4cfd5756802be86d93e17a1838d65eb6128715d5a853283ace617f1fc"} Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.145748 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.147058 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.148995 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.149240 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.149385 4835 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-qqlh6" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.157405 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.335943 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbgz\" (UniqueName: \"kubernetes.io/projected/bcd0ca85-9690-4893-95dd-f8a07a076a33-kube-api-access-hvbgz\") pod \"minio\" (UID: \"bcd0ca85-9690-4893-95dd-f8a07a076a33\") " pod="minio-dev/minio" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.336097 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0d0d1b8-d491-401e-84df-8d8cf445bcb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0d0d1b8-d491-401e-84df-8d8cf445bcb2\") pod \"minio\" (UID: \"bcd0ca85-9690-4893-95dd-f8a07a076a33\") " pod="minio-dev/minio" Feb 16 15:18:46 crc 
kubenswrapper[4835]: I0216 15:18:46.437766 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvbgz\" (UniqueName: \"kubernetes.io/projected/bcd0ca85-9690-4893-95dd-f8a07a076a33-kube-api-access-hvbgz\") pod \"minio\" (UID: \"bcd0ca85-9690-4893-95dd-f8a07a076a33\") " pod="minio-dev/minio" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.437848 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b0d0d1b8-d491-401e-84df-8d8cf445bcb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0d0d1b8-d491-401e-84df-8d8cf445bcb2\") pod \"minio\" (UID: \"bcd0ca85-9690-4893-95dd-f8a07a076a33\") " pod="minio-dev/minio" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.441011 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.441049 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b0d0d1b8-d491-401e-84df-8d8cf445bcb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0d0d1b8-d491-401e-84df-8d8cf445bcb2\") pod \"minio\" (UID: \"bcd0ca85-9690-4893-95dd-f8a07a076a33\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46f43e7939f04d7a1426c8c61999d23bd7d834fea1d436f35869bec4d6d1a22a/globalmount\"" pod="minio-dev/minio" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.469756 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvbgz\" (UniqueName: \"kubernetes.io/projected/bcd0ca85-9690-4893-95dd-f8a07a076a33-kube-api-access-hvbgz\") pod \"minio\" (UID: \"bcd0ca85-9690-4893-95dd-f8a07a076a33\") " pod="minio-dev/minio" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.473679 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-b0d0d1b8-d491-401e-84df-8d8cf445bcb2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0d0d1b8-d491-401e-84df-8d8cf445bcb2\") pod \"minio\" (UID: \"bcd0ca85-9690-4893-95dd-f8a07a076a33\") " pod="minio-dev/minio" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.514081 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 15:18:46 crc kubenswrapper[4835]: I0216 15:18:46.724594 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 15:18:46 crc kubenswrapper[4835]: W0216 15:18:46.742750 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcd0ca85_9690_4893_95dd_f8a07a076a33.slice/crio-1cba2c6d7820e163274b6f64de687d441e82e739f45f48300aadf53e642962d5 WatchSource:0}: Error finding container 1cba2c6d7820e163274b6f64de687d441e82e739f45f48300aadf53e642962d5: Status 404 returned error can't find the container with id 1cba2c6d7820e163274b6f64de687d441e82e739f45f48300aadf53e642962d5 Feb 16 15:18:47 crc kubenswrapper[4835]: I0216 15:18:47.546916 4835 generic.go:334] "Generic (PLEG): container finished" podID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerID="cfb655173743ea10d180cedceebd6e9e01295e67d0ab69237d8676f40e3447f1" exitCode=0 Feb 16 15:18:47 crc kubenswrapper[4835]: I0216 15:18:47.547008 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" event={"ID":"21de0e14-792d-4a0f-9a1b-6303df1eac87","Type":"ContainerDied","Data":"cfb655173743ea10d180cedceebd6e9e01295e67d0ab69237d8676f40e3447f1"} Feb 16 15:18:47 crc kubenswrapper[4835]: I0216 15:18:47.549452 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"bcd0ca85-9690-4893-95dd-f8a07a076a33","Type":"ContainerStarted","Data":"1cba2c6d7820e163274b6f64de687d441e82e739f45f48300aadf53e642962d5"} Feb 16 
15:18:48 crc kubenswrapper[4835]: I0216 15:18:48.557999 4835 generic.go:334] "Generic (PLEG): container finished" podID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerID="3cb57215674accdf3cc604253814f8eed0b5f58fe8a4e4212887223b8f7a9a2f" exitCode=0 Feb 16 15:18:48 crc kubenswrapper[4835]: I0216 15:18:48.558196 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" event={"ID":"21de0e14-792d-4a0f-9a1b-6303df1eac87","Type":"ContainerDied","Data":"3cb57215674accdf3cc604253814f8eed0b5f58fe8a4e4212887223b8f7a9a2f"} Feb 16 15:18:49 crc kubenswrapper[4835]: I0216 15:18:49.575734 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"bcd0ca85-9690-4893-95dd-f8a07a076a33","Type":"ContainerStarted","Data":"5cecfb1536be4d4a2905f2071be8a9656f69147b69ff60ebf877737282b88196"} Feb 16 15:18:49 crc kubenswrapper[4835]: I0216 15:18:49.604349 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=2.951451119 podStartE2EDuration="5.604290716s" podCreationTimestamp="2026-02-16 15:18:44 +0000 UTC" firstStartedPulling="2026-02-16 15:18:46.745435508 +0000 UTC m=+676.037428403" lastFinishedPulling="2026-02-16 15:18:49.398275105 +0000 UTC m=+678.690268000" observedRunningTime="2026-02-16 15:18:49.595098118 +0000 UTC m=+678.887091033" watchObservedRunningTime="2026-02-16 15:18:49.604290716 +0000 UTC m=+678.896283631" Feb 16 15:18:49 crc kubenswrapper[4835]: I0216 15:18:49.847062 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.020171 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-util\") pod \"21de0e14-792d-4a0f-9a1b-6303df1eac87\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.020288 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-bundle\") pod \"21de0e14-792d-4a0f-9a1b-6303df1eac87\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.020340 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnh6v\" (UniqueName: \"kubernetes.io/projected/21de0e14-792d-4a0f-9a1b-6303df1eac87-kube-api-access-cnh6v\") pod \"21de0e14-792d-4a0f-9a1b-6303df1eac87\" (UID: \"21de0e14-792d-4a0f-9a1b-6303df1eac87\") " Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.022240 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-bundle" (OuterVolumeSpecName: "bundle") pod "21de0e14-792d-4a0f-9a1b-6303df1eac87" (UID: "21de0e14-792d-4a0f-9a1b-6303df1eac87"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.026907 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21de0e14-792d-4a0f-9a1b-6303df1eac87-kube-api-access-cnh6v" (OuterVolumeSpecName: "kube-api-access-cnh6v") pod "21de0e14-792d-4a0f-9a1b-6303df1eac87" (UID: "21de0e14-792d-4a0f-9a1b-6303df1eac87"). InnerVolumeSpecName "kube-api-access-cnh6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.038332 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-util" (OuterVolumeSpecName: "util") pod "21de0e14-792d-4a0f-9a1b-6303df1eac87" (UID: "21de0e14-792d-4a0f-9a1b-6303df1eac87"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.121910 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.121942 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21de0e14-792d-4a0f-9a1b-6303df1eac87-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.121957 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnh6v\" (UniqueName: \"kubernetes.io/projected/21de0e14-792d-4a0f-9a1b-6303df1eac87-kube-api-access-cnh6v\") on node \"crc\" DevicePath \"\"" Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.583378 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.584596 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z" event={"ID":"21de0e14-792d-4a0f-9a1b-6303df1eac87","Type":"ContainerDied","Data":"055c1ac4cfd5756802be86d93e17a1838d65eb6128715d5a853283ace617f1fc"} Feb 16 15:18:50 crc kubenswrapper[4835]: I0216 15:18:50.584628 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="055c1ac4cfd5756802be86d93e17a1838d65eb6128715d5a853283ace617f1fc" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.498720 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm"] Feb 16 15:18:56 crc kubenswrapper[4835]: E0216 15:18:56.499421 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerName="extract" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.499432 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerName="extract" Feb 16 15:18:56 crc kubenswrapper[4835]: E0216 15:18:56.499444 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerName="pull" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.499450 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerName="pull" Feb 16 15:18:56 crc kubenswrapper[4835]: E0216 15:18:56.499460 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerName="util" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.499467 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerName="util" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.499575 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="21de0e14-792d-4a0f-9a1b-6303df1eac87" containerName="extract" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.500112 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.509939 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.510025 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.510125 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.510191 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-7zlcf" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.510252 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.510953 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.518193 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm"] Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.632587 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-webhook-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.632849 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-manager-config\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.632973 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.633145 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvhq\" (UniqueName: \"kubernetes.io/projected/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-kube-api-access-kfvhq\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.633254 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-apiservice-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.734159 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-apiservice-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.734235 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-webhook-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.734275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-manager-config\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.734331 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.734371 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvhq\" (UniqueName: \"kubernetes.io/projected/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-kube-api-access-kfvhq\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.735213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-manager-config\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.740305 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.742353 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-webhook-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.744383 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-apiservice-cert\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.754982 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvhq\" (UniqueName: \"kubernetes.io/projected/ecd1cee6-74c4-493f-9a48-c5e5cecb7cde-kube-api-access-kfvhq\") pod \"loki-operator-controller-manager-5b666486b-bbgfm\" (UID: \"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:56 crc kubenswrapper[4835]: I0216 15:18:56.820697 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:18:57 crc kubenswrapper[4835]: I0216 15:18:57.279056 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm"] Feb 16 15:18:57 crc kubenswrapper[4835]: W0216 15:18:57.285773 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd1cee6_74c4_493f_9a48_c5e5cecb7cde.slice/crio-52bf4e902aa40c5aec5502942352acf23f277343f15fdb0ca6d0848e8900a715 WatchSource:0}: Error finding container 52bf4e902aa40c5aec5502942352acf23f277343f15fdb0ca6d0848e8900a715: Status 404 returned error can't find the container with id 52bf4e902aa40c5aec5502942352acf23f277343f15fdb0ca6d0848e8900a715 Feb 16 15:18:57 crc kubenswrapper[4835]: I0216 15:18:57.623067 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" 
event={"ID":"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde","Type":"ContainerStarted","Data":"52bf4e902aa40c5aec5502942352acf23f277343f15fdb0ca6d0848e8900a715"} Feb 16 15:19:02 crc kubenswrapper[4835]: I0216 15:19:02.664295 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" event={"ID":"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde","Type":"ContainerStarted","Data":"72cb28e805388576816a608460a32d3f122cc725f4b01a16d2a5a6344ebc8f9c"} Feb 16 15:19:07 crc kubenswrapper[4835]: I0216 15:19:07.690123 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" event={"ID":"ecd1cee6-74c4-493f-9a48-c5e5cecb7cde","Type":"ContainerStarted","Data":"008dec66dd2fd2b1eed4b71736e2f855c1d1021b2eca2af637cff16779e85987"} Feb 16 15:19:07 crc kubenswrapper[4835]: I0216 15:19:07.690779 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:19:07 crc kubenswrapper[4835]: I0216 15:19:07.692479 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" Feb 16 15:19:07 crc kubenswrapper[4835]: I0216 15:19:07.710746 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5b666486b-bbgfm" podStartSLOduration=1.634540963 podStartE2EDuration="11.710728795s" podCreationTimestamp="2026-02-16 15:18:56 +0000 UTC" firstStartedPulling="2026-02-16 15:18:57.28903897 +0000 UTC m=+686.581031875" lastFinishedPulling="2026-02-16 15:19:07.365226812 +0000 UTC m=+696.657219707" observedRunningTime="2026-02-16 15:19:07.706049464 +0000 UTC m=+696.998042419" watchObservedRunningTime="2026-02-16 15:19:07.710728795 +0000 UTC m=+697.002721690" Feb 16 15:19:18 crc kubenswrapper[4835]: I0216 
15:19:18.586803 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:19:18 crc kubenswrapper[4835]: I0216 15:19:18.587422 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:19:30 crc kubenswrapper[4835]: I0216 15:19:30.838165 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s"] Feb 16 15:19:30 crc kubenswrapper[4835]: I0216 15:19:30.840132 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:30 crc kubenswrapper[4835]: I0216 15:19:30.842157 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:19:30 crc kubenswrapper[4835]: I0216 15:19:30.844642 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s"] Feb 16 15:19:30 crc kubenswrapper[4835]: I0216 15:19:30.983384 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:30 crc kubenswrapper[4835]: I0216 15:19:30.983436 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:30 crc kubenswrapper[4835]: I0216 15:19:30.983479 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wf7j\" (UniqueName: \"kubernetes.io/projected/17e06f1c-269d-46fe-aec8-1791239a585a-kube-api-access-7wf7j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:31 crc kubenswrapper[4835]: 
I0216 15:19:31.084852 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wf7j\" (UniqueName: \"kubernetes.io/projected/17e06f1c-269d-46fe-aec8-1791239a585a-kube-api-access-7wf7j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.084952 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.084985 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.085490 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.085680 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.105307 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wf7j\" (UniqueName: \"kubernetes.io/projected/17e06f1c-269d-46fe-aec8-1791239a585a-kube-api-access-7wf7j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.194316 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.502901 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s"] Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.845923 4835 generic.go:334] "Generic (PLEG): container finished" podID="17e06f1c-269d-46fe-aec8-1791239a585a" containerID="46174914241d082f044558612ed89e4f4a2485b3f9fe8bf82505df92cc13a614" exitCode=0 Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.845961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" event={"ID":"17e06f1c-269d-46fe-aec8-1791239a585a","Type":"ContainerDied","Data":"46174914241d082f044558612ed89e4f4a2485b3f9fe8bf82505df92cc13a614"} Feb 16 15:19:31 crc kubenswrapper[4835]: I0216 15:19:31.846011 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" event={"ID":"17e06f1c-269d-46fe-aec8-1791239a585a","Type":"ContainerStarted","Data":"1dff6142757700aede2f6c4ab11f8f4064127fc6ac44fe741e6063192bf21547"} Feb 16 15:19:33 crc kubenswrapper[4835]: I0216 15:19:33.869617 4835 generic.go:334] "Generic (PLEG): container finished" podID="17e06f1c-269d-46fe-aec8-1791239a585a" containerID="3f0d91a2532066333d694082ca61b09909086129715dddd068d173bde256c69c" exitCode=0 Feb 16 15:19:33 crc kubenswrapper[4835]: I0216 15:19:33.870263 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" event={"ID":"17e06f1c-269d-46fe-aec8-1791239a585a","Type":"ContainerDied","Data":"3f0d91a2532066333d694082ca61b09909086129715dddd068d173bde256c69c"} Feb 16 15:19:34 crc kubenswrapper[4835]: I0216 15:19:34.878125 4835 generic.go:334] "Generic (PLEG): container finished" podID="17e06f1c-269d-46fe-aec8-1791239a585a" containerID="4d6abe619e2f8c51344570ee39b131b68a164a07984d5cd742e1a1a7549f9e9a" exitCode=0 Feb 16 15:19:34 crc kubenswrapper[4835]: I0216 15:19:34.878165 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" event={"ID":"17e06f1c-269d-46fe-aec8-1791239a585a","Type":"ContainerDied","Data":"4d6abe619e2f8c51344570ee39b131b68a164a07984d5cd742e1a1a7549f9e9a"} Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.142433 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.175627 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-util\") pod \"17e06f1c-269d-46fe-aec8-1791239a585a\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.175686 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wf7j\" (UniqueName: \"kubernetes.io/projected/17e06f1c-269d-46fe-aec8-1791239a585a-kube-api-access-7wf7j\") pod \"17e06f1c-269d-46fe-aec8-1791239a585a\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.175830 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-bundle\") pod \"17e06f1c-269d-46fe-aec8-1791239a585a\" (UID: \"17e06f1c-269d-46fe-aec8-1791239a585a\") " Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.177028 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-bundle" (OuterVolumeSpecName: "bundle") pod "17e06f1c-269d-46fe-aec8-1791239a585a" (UID: "17e06f1c-269d-46fe-aec8-1791239a585a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.184955 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e06f1c-269d-46fe-aec8-1791239a585a-kube-api-access-7wf7j" (OuterVolumeSpecName: "kube-api-access-7wf7j") pod "17e06f1c-269d-46fe-aec8-1791239a585a" (UID: "17e06f1c-269d-46fe-aec8-1791239a585a"). InnerVolumeSpecName "kube-api-access-7wf7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.190140 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-util" (OuterVolumeSpecName: "util") pod "17e06f1c-269d-46fe-aec8-1791239a585a" (UID: "17e06f1c-269d-46fe-aec8-1791239a585a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.276859 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.276896 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e06f1c-269d-46fe-aec8-1791239a585a-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.276911 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wf7j\" (UniqueName: \"kubernetes.io/projected/17e06f1c-269d-46fe-aec8-1791239a585a-kube-api-access-7wf7j\") on node \"crc\" DevicePath \"\"" Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.891960 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" event={"ID":"17e06f1c-269d-46fe-aec8-1791239a585a","Type":"ContainerDied","Data":"1dff6142757700aede2f6c4ab11f8f4064127fc6ac44fe741e6063192bf21547"} Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.892018 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s" Feb 16 15:19:36 crc kubenswrapper[4835]: I0216 15:19:36.892017 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dff6142757700aede2f6c4ab11f8f4064127fc6ac44fe741e6063192bf21547" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.500157 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-ndjz7"] Feb 16 15:19:42 crc kubenswrapper[4835]: E0216 15:19:42.500972 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e06f1c-269d-46fe-aec8-1791239a585a" containerName="extract" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.500985 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e06f1c-269d-46fe-aec8-1791239a585a" containerName="extract" Feb 16 15:19:42 crc kubenswrapper[4835]: E0216 15:19:42.500995 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e06f1c-269d-46fe-aec8-1791239a585a" containerName="util" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.501019 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e06f1c-269d-46fe-aec8-1791239a585a" containerName="util" Feb 16 15:19:42 crc kubenswrapper[4835]: E0216 15:19:42.501029 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e06f1c-269d-46fe-aec8-1791239a585a" containerName="pull" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.501035 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e06f1c-269d-46fe-aec8-1791239a585a" containerName="pull" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.501150 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e06f1c-269d-46fe-aec8-1791239a585a" containerName="extract" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.501652 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-ndjz7" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.503383 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.503485 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lzvn8" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.503750 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.511240 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-ndjz7"] Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.551784 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqslp\" (UniqueName: \"kubernetes.io/projected/c98c3fc0-ce33-4bef-9089-1dd9da5100a1-kube-api-access-bqslp\") pod \"nmstate-operator-694c9596b7-ndjz7\" (UID: \"c98c3fc0-ce33-4bef-9089-1dd9da5100a1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-ndjz7" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.652678 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqslp\" (UniqueName: \"kubernetes.io/projected/c98c3fc0-ce33-4bef-9089-1dd9da5100a1-kube-api-access-bqslp\") pod \"nmstate-operator-694c9596b7-ndjz7\" (UID: \"c98c3fc0-ce33-4bef-9089-1dd9da5100a1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-ndjz7" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.672398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqslp\" (UniqueName: \"kubernetes.io/projected/c98c3fc0-ce33-4bef-9089-1dd9da5100a1-kube-api-access-bqslp\") pod \"nmstate-operator-694c9596b7-ndjz7\" (UID: 
\"c98c3fc0-ce33-4bef-9089-1dd9da5100a1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-ndjz7" Feb 16 15:19:42 crc kubenswrapper[4835]: I0216 15:19:42.818566 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-ndjz7" Feb 16 15:19:43 crc kubenswrapper[4835]: I0216 15:19:43.078143 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-ndjz7"] Feb 16 15:19:43 crc kubenswrapper[4835]: I0216 15:19:43.940645 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-ndjz7" event={"ID":"c98c3fc0-ce33-4bef-9089-1dd9da5100a1","Type":"ContainerStarted","Data":"68bd8916e1cf4b8670624651334baa108c6d62f4e399cbc38230ac9fd99e4e46"} Feb 16 15:19:45 crc kubenswrapper[4835]: I0216 15:19:45.952212 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-ndjz7" event={"ID":"c98c3fc0-ce33-4bef-9089-1dd9da5100a1","Type":"ContainerStarted","Data":"bcbc6a96147a30d600561fb5e492b799bcd9ca0385ba4113dcd128ac6b08520b"} Feb 16 15:19:45 crc kubenswrapper[4835]: I0216 15:19:45.967582 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-ndjz7" podStartSLOduration=1.489185567 podStartE2EDuration="3.967563493s" podCreationTimestamp="2026-02-16 15:19:42 +0000 UTC" firstStartedPulling="2026-02-16 15:19:43.092283371 +0000 UTC m=+732.384276266" lastFinishedPulling="2026-02-16 15:19:45.570661297 +0000 UTC m=+734.862654192" observedRunningTime="2026-02-16 15:19:45.964196916 +0000 UTC m=+735.256189821" watchObservedRunningTime="2026-02-16 15:19:45.967563493 +0000 UTC m=+735.259556388" Feb 16 15:19:48 crc kubenswrapper[4835]: I0216 15:19:48.586426 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:19:48 crc kubenswrapper[4835]: I0216 15:19:48.587438 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.924335 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk"] Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.926968 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.934556 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7tpvd" Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.949640 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb"] Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.952355 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.960191 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk"] Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.965296 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb"] Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.972296 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bm4wc"] Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.973447 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.974413 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c5nb\" (UniqueName: \"kubernetes.io/projected/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-kube-api-access-4c5nb\") pod \"nmstate-webhook-866bcb46dc-p8bnb\" (UID: \"0d8011f2-2374-49ae-8d2c-f86f2754d7c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.974480 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-p8bnb\" (UID: \"0d8011f2-2374-49ae-8d2c-f86f2754d7c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:51 crc kubenswrapper[4835]: I0216 15:19:51.974553 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kblc\" (UniqueName: \"kubernetes.io/projected/9d8159c8-4992-44d0-a520-113c6b5a6d15-kube-api-access-7kblc\") pod \"nmstate-metrics-58c85c668d-gx9lk\" (UID: 
\"9d8159c8-4992-44d0-a520-113c6b5a6d15\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.000585 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.075767 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c5nb\" (UniqueName: \"kubernetes.io/projected/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-kube-api-access-4c5nb\") pod \"nmstate-webhook-866bcb46dc-p8bnb\" (UID: \"0d8011f2-2374-49ae-8d2c-f86f2754d7c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.075824 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-dbus-socket\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.075881 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-p8bnb\" (UID: \"0d8011f2-2374-49ae-8d2c-f86f2754d7c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.075907 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-ovs-socket\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.076003 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-nmstate-lock\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.076034 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4qvm\" (UniqueName: \"kubernetes.io/projected/96d2b8c3-0eac-405b-a1a9-81230ea1a601-kube-api-access-n4qvm\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: E0216 15:19:52.076039 4835 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.076054 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kblc\" (UniqueName: \"kubernetes.io/projected/9d8159c8-4992-44d0-a520-113c6b5a6d15-kube-api-access-7kblc\") pod \"nmstate-metrics-58c85c668d-gx9lk\" (UID: \"9d8159c8-4992-44d0-a520-113c6b5a6d15\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" Feb 16 15:19:52 crc kubenswrapper[4835]: E0216 15:19:52.076088 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-tls-key-pair podName:0d8011f2-2374-49ae-8d2c-f86f2754d7c3 nodeName:}" failed. No retries permitted until 2026-02-16 15:19:52.576068897 +0000 UTC m=+741.868061792 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-tls-key-pair") pod "nmstate-webhook-866bcb46dc-p8bnb" (UID: "0d8011f2-2374-49ae-8d2c-f86f2754d7c3") : secret "openshift-nmstate-webhook" not found Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.082838 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d"] Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.083948 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.086368 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.086716 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.086905 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zcqkk" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.101662 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kblc\" (UniqueName: \"kubernetes.io/projected/9d8159c8-4992-44d0-a520-113c6b5a6d15-kube-api-access-7kblc\") pod \"nmstate-metrics-58c85c668d-gx9lk\" (UID: \"9d8159c8-4992-44d0-a520-113c6b5a6d15\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.109090 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d"] Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.127180 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c5nb\" (UniqueName: 
\"kubernetes.io/projected/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-kube-api-access-4c5nb\") pod \"nmstate-webhook-866bcb46dc-p8bnb\" (UID: \"0d8011f2-2374-49ae-8d2c-f86f2754d7c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.176835 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4dfj\" (UniqueName: \"kubernetes.io/projected/7e4599b8-b807-4433-bd32-b134460e028a-kube-api-access-b4dfj\") pod \"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.177199 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-dbus-socket\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.177328 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e4599b8-b807-4433-bd32-b134460e028a-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.177446 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7e4599b8-b807-4433-bd32-b134460e028a-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 
15:19:52.177542 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-dbus-socket\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.177711 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-ovs-socket\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.177780 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-ovs-socket\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.177927 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-nmstate-lock\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.178014 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/96d2b8c3-0eac-405b-a1a9-81230ea1a601-nmstate-lock\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.178165 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qvm\" (UniqueName: 
\"kubernetes.io/projected/96d2b8c3-0eac-405b-a1a9-81230ea1a601-kube-api-access-n4qvm\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.193769 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4qvm\" (UniqueName: \"kubernetes.io/projected/96d2b8c3-0eac-405b-a1a9-81230ea1a601-kube-api-access-n4qvm\") pod \"nmstate-handler-bm4wc\" (UID: \"96d2b8c3-0eac-405b-a1a9-81230ea1a601\") " pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.262564 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7889c5555f-kkxnx"] Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.263543 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.279959 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0257c8f-1435-412b-bb50-93540e2dee97-console-oauth-config\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280004 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-console-config\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280036 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c0257c8f-1435-412b-bb50-93540e2dee97-console-serving-cert\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280084 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-service-ca\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280110 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-trusted-ca-bundle\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280150 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4dfj\" (UniqueName: \"kubernetes.io/projected/7e4599b8-b807-4433-bd32-b134460e028a-kube-api-access-b4dfj\") pod \"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e4599b8-b807-4433-bd32-b134460e028a-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280203 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-oauth-serving-cert\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280223 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl558\" (UniqueName: \"kubernetes.io/projected/c0257c8f-1435-412b-bb50-93540e2dee97-kube-api-access-cl558\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280241 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7e4599b8-b807-4433-bd32-b134460e028a-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.280331 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7889c5555f-kkxnx"] Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.281626 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7e4599b8-b807-4433-bd32-b134460e028a-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.297961 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e4599b8-b807-4433-bd32-b134460e028a-plugin-serving-cert\") pod 
\"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.309690 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4dfj\" (UniqueName: \"kubernetes.io/projected/7e4599b8-b807-4433-bd32-b134460e028a-kube-api-access-b4dfj\") pod \"nmstate-console-plugin-5c78fc5d65-5zd7d\" (UID: \"7e4599b8-b807-4433-bd32-b134460e028a\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.311249 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.335800 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.381155 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-trusted-ca-bundle\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.381238 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-oauth-serving-cert\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.381258 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl558\" (UniqueName: 
\"kubernetes.io/projected/c0257c8f-1435-412b-bb50-93540e2dee97-kube-api-access-cl558\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.381288 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0257c8f-1435-412b-bb50-93540e2dee97-console-oauth-config\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.381311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-console-config\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.381330 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0257c8f-1435-412b-bb50-93540e2dee97-console-serving-cert\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.381348 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-service-ca\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.382137 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-service-ca\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.382835 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-trusted-ca-bundle\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.383324 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-oauth-serving-cert\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.384792 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0257c8f-1435-412b-bb50-93540e2dee97-console-config\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.391208 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0257c8f-1435-412b-bb50-93540e2dee97-console-serving-cert\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.392069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/c0257c8f-1435-412b-bb50-93540e2dee97-console-oauth-config\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.398998 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.404901 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl558\" (UniqueName: \"kubernetes.io/projected/c0257c8f-1435-412b-bb50-93540e2dee97-kube-api-access-cl558\") pod \"console-7889c5555f-kkxnx\" (UID: \"c0257c8f-1435-412b-bb50-93540e2dee97\") " pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.578452 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.585205 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-p8bnb\" (UID: \"0d8011f2-2374-49ae-8d2c-f86f2754d7c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.592393 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk"] Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.596454 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0d8011f2-2374-49ae-8d2c-f86f2754d7c3-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-p8bnb\" (UID: \"0d8011f2-2374-49ae-8d2c-f86f2754d7c3\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:52 crc 
kubenswrapper[4835]: W0216 15:19:52.599698 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d8159c8_4992_44d0_a520_113c6b5a6d15.slice/crio-afb2220e29aa077c6e70ad6d34ddd4ecf4eb9c86ed16814d9ad1eaae2d7a1c04 WatchSource:0}: Error finding container afb2220e29aa077c6e70ad6d34ddd4ecf4eb9c86ed16814d9ad1eaae2d7a1c04: Status 404 returned error can't find the container with id afb2220e29aa077c6e70ad6d34ddd4ecf4eb9c86ed16814d9ad1eaae2d7a1c04 Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.621859 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d"] Feb 16 15:19:52 crc kubenswrapper[4835]: W0216 15:19:52.624776 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e4599b8_b807_4433_bd32_b134460e028a.slice/crio-94e15c9bf9913c76082a8ee2e64227308d30c434877bc12949b5750efb49b483 WatchSource:0}: Error finding container 94e15c9bf9913c76082a8ee2e64227308d30c434877bc12949b5750efb49b483: Status 404 returned error can't find the container with id 94e15c9bf9913c76082a8ee2e64227308d30c434877bc12949b5750efb49b483 Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.627370 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.761929 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7889c5555f-kkxnx"] Feb 16 15:19:52 crc kubenswrapper[4835]: I0216 15:19:52.833587 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb"] Feb 16 15:19:52 crc kubenswrapper[4835]: W0216 15:19:52.840231 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d8011f2_2374_49ae_8d2c_f86f2754d7c3.slice/crio-1b5ddcd832ed4f394363060dba16dc9244d8d2bc092693c69229dfa1c507f43a WatchSource:0}: Error finding container 1b5ddcd832ed4f394363060dba16dc9244d8d2bc092693c69229dfa1c507f43a: Status 404 returned error can't find the container with id 1b5ddcd832ed4f394363060dba16dc9244d8d2bc092693c69229dfa1c507f43a Feb 16 15:19:53 crc kubenswrapper[4835]: I0216 15:19:53.011083 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" event={"ID":"9d8159c8-4992-44d0-a520-113c6b5a6d15","Type":"ContainerStarted","Data":"afb2220e29aa077c6e70ad6d34ddd4ecf4eb9c86ed16814d9ad1eaae2d7a1c04"} Feb 16 15:19:53 crc kubenswrapper[4835]: I0216 15:19:53.012326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7889c5555f-kkxnx" event={"ID":"c0257c8f-1435-412b-bb50-93540e2dee97","Type":"ContainerStarted","Data":"24ed0201cc1db31e623fcec57fddb9697d3bca6b877742c93492b85d0943307f"} Feb 16 15:19:53 crc kubenswrapper[4835]: I0216 15:19:53.012361 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7889c5555f-kkxnx" event={"ID":"c0257c8f-1435-412b-bb50-93540e2dee97","Type":"ContainerStarted","Data":"ec773090185ab7e9a6eff434b59181df2e81f91289f4f2004175477969aeb0e4"} Feb 16 15:19:53 crc kubenswrapper[4835]: I0216 15:19:53.013204 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" event={"ID":"7e4599b8-b807-4433-bd32-b134460e028a","Type":"ContainerStarted","Data":"94e15c9bf9913c76082a8ee2e64227308d30c434877bc12949b5750efb49b483"} Feb 16 15:19:53 crc kubenswrapper[4835]: I0216 15:19:53.014166 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bm4wc" event={"ID":"96d2b8c3-0eac-405b-a1a9-81230ea1a601","Type":"ContainerStarted","Data":"220d576a8b8a35954bcc42684d934b83b9dc19747813ed8a2096b1b8001155ca"} Feb 16 15:19:53 crc kubenswrapper[4835]: I0216 15:19:53.015056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" event={"ID":"0d8011f2-2374-49ae-8d2c-f86f2754d7c3","Type":"ContainerStarted","Data":"1b5ddcd832ed4f394363060dba16dc9244d8d2bc092693c69229dfa1c507f43a"} Feb 16 15:19:53 crc kubenswrapper[4835]: I0216 15:19:53.031960 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7889c5555f-kkxnx" podStartSLOduration=1.031943346 podStartE2EDuration="1.031943346s" podCreationTimestamp="2026-02-16 15:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:19:53.030513769 +0000 UTC m=+742.322506694" watchObservedRunningTime="2026-02-16 15:19:53.031943346 +0000 UTC m=+742.323936241" Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.044065 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" event={"ID":"0d8011f2-2374-49ae-8d2c-f86f2754d7c3","Type":"ContainerStarted","Data":"a6c0296e217d78d0dcf11c1f7531c89fb8639edfea51c04c3fd90934314bed26"} Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.044752 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" 
Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.047335 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" event={"ID":"9d8159c8-4992-44d0-a520-113c6b5a6d15","Type":"ContainerStarted","Data":"b4df4aa4d0c647f31bfefa467f99a524514d176e9fa5d9960996993e9ecff00b"} Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.048694 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" event={"ID":"7e4599b8-b807-4433-bd32-b134460e028a","Type":"ContainerStarted","Data":"50f7934db18e881d2ee4446e35688cecac25e50c5fdf16140e03a13b3062917e"} Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.050375 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bm4wc" event={"ID":"96d2b8c3-0eac-405b-a1a9-81230ea1a601","Type":"ContainerStarted","Data":"78bd9c04e245c2fa3b72009fba6b25d84d0bcb5b9ec159031bf9d56015e52cfb"} Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.050754 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.089681 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-5zd7d" podStartSLOduration=1.22770647 podStartE2EDuration="4.089659958s" podCreationTimestamp="2026-02-16 15:19:52 +0000 UTC" firstStartedPulling="2026-02-16 15:19:52.627293591 +0000 UTC m=+741.919286486" lastFinishedPulling="2026-02-16 15:19:55.489247039 +0000 UTC m=+744.781239974" observedRunningTime="2026-02-16 15:19:56.088151519 +0000 UTC m=+745.380144414" watchObservedRunningTime="2026-02-16 15:19:56.089659958 +0000 UTC m=+745.381652853" Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.091763 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" 
podStartSLOduration=2.444024411 podStartE2EDuration="5.091755752s" podCreationTimestamp="2026-02-16 15:19:51 +0000 UTC" firstStartedPulling="2026-02-16 15:19:52.843430138 +0000 UTC m=+742.135423043" lastFinishedPulling="2026-02-16 15:19:55.491161489 +0000 UTC m=+744.783154384" observedRunningTime="2026-02-16 15:19:56.065077456 +0000 UTC m=+745.357070351" watchObservedRunningTime="2026-02-16 15:19:56.091755752 +0000 UTC m=+745.383748647" Feb 16 15:19:56 crc kubenswrapper[4835]: I0216 15:19:56.122600 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bm4wc" podStartSLOduration=2.029863652 podStartE2EDuration="5.122578134s" podCreationTimestamp="2026-02-16 15:19:51 +0000 UTC" firstStartedPulling="2026-02-16 15:19:52.395899371 +0000 UTC m=+741.687892266" lastFinishedPulling="2026-02-16 15:19:55.488613853 +0000 UTC m=+744.780606748" observedRunningTime="2026-02-16 15:19:56.119568267 +0000 UTC m=+745.411561162" watchObservedRunningTime="2026-02-16 15:19:56.122578134 +0000 UTC m=+745.414571029" Feb 16 15:19:58 crc kubenswrapper[4835]: I0216 15:19:58.062978 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" event={"ID":"9d8159c8-4992-44d0-a520-113c6b5a6d15","Type":"ContainerStarted","Data":"2db5690e7f2ca1209219dc52d8cc5d73a01655bec571c820859afa1ee36b1a3b"} Feb 16 15:19:58 crc kubenswrapper[4835]: I0216 15:19:58.079610 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-gx9lk" podStartSLOduration=1.8129427439999999 podStartE2EDuration="7.079594904s" podCreationTimestamp="2026-02-16 15:19:51 +0000 UTC" firstStartedPulling="2026-02-16 15:19:52.607169413 +0000 UTC m=+741.899162308" lastFinishedPulling="2026-02-16 15:19:57.873821573 +0000 UTC m=+747.165814468" observedRunningTime="2026-02-16 15:19:58.078135136 +0000 UTC m=+747.370128031" watchObservedRunningTime="2026-02-16 
15:19:58.079594904 +0000 UTC m=+747.371587799" Feb 16 15:20:02 crc kubenswrapper[4835]: I0216 15:20:02.359455 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bm4wc" Feb 16 15:20:02 crc kubenswrapper[4835]: I0216 15:20:02.578972 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:20:02 crc kubenswrapper[4835]: I0216 15:20:02.579022 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:20:02 crc kubenswrapper[4835]: I0216 15:20:02.584043 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:20:03 crc kubenswrapper[4835]: I0216 15:20:03.097785 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7889c5555f-kkxnx" Feb 16 15:20:03 crc kubenswrapper[4835]: I0216 15:20:03.167849 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-28xh9"] Feb 16 15:20:04 crc kubenswrapper[4835]: I0216 15:20:04.327786 4835 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 15:20:12 crc kubenswrapper[4835]: I0216 15:20:12.633502 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-p8bnb" Feb 16 15:20:18 crc kubenswrapper[4835]: I0216 15:20:18.587483 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:20:18 crc kubenswrapper[4835]: I0216 15:20:18.587881 4835 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:20:18 crc kubenswrapper[4835]: I0216 15:20:18.587943 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:20:18 crc kubenswrapper[4835]: I0216 15:20:18.588819 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73c350b3ac02f46ce7b27cf1db88e1f50effcba02c6c2d6096a643a3b0037668"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:20:18 crc kubenswrapper[4835]: I0216 15:20:18.588908 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://73c350b3ac02f46ce7b27cf1db88e1f50effcba02c6c2d6096a643a3b0037668" gracePeriod=600 Feb 16 15:20:19 crc kubenswrapper[4835]: I0216 15:20:19.207450 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="73c350b3ac02f46ce7b27cf1db88e1f50effcba02c6c2d6096a643a3b0037668" exitCode=0 Feb 16 15:20:19 crc kubenswrapper[4835]: I0216 15:20:19.207573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"73c350b3ac02f46ce7b27cf1db88e1f50effcba02c6c2d6096a643a3b0037668"} Feb 16 15:20:19 crc kubenswrapper[4835]: I0216 15:20:19.207865 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"fccec89c350093c4d7a854530c72eda475f0a4084457fa6bd80b80278b734735"} Feb 16 15:20:19 crc kubenswrapper[4835]: I0216 15:20:19.207887 4835 scope.go:117] "RemoveContainer" containerID="1a28f0d6525f9971d77742b65377602c37eac99fbd17ba56f4ecbef96e8a8ccd" Feb 16 15:20:27 crc kubenswrapper[4835]: I0216 15:20:27.809021 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22"] Feb 16 15:20:27 crc kubenswrapper[4835]: I0216 15:20:27.810964 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:27 crc kubenswrapper[4835]: I0216 15:20:27.814484 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:20:27 crc kubenswrapper[4835]: I0216 15:20:27.820701 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22"] Feb 16 15:20:27 crc kubenswrapper[4835]: I0216 15:20:27.901446 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l86v4\" (UniqueName: \"kubernetes.io/projected/54746025-5068-4cc5-ba0a-a24755a67627-kube-api-access-l86v4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:27 crc kubenswrapper[4835]: I0216 15:20:27.901496 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:27 crc kubenswrapper[4835]: I0216 15:20:27.901518 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.002379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.002425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.002506 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l86v4\" (UniqueName: \"kubernetes.io/projected/54746025-5068-4cc5-ba0a-a24755a67627-kube-api-access-l86v4\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.003155 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.003253 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.020362 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l86v4\" (UniqueName: \"kubernetes.io/projected/54746025-5068-4cc5-ba0a-a24755a67627-kube-api-access-l86v4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.133047 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.216859 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-28xh9" podUID="3d329678-3edc-4b70-9796-85c6ada120de" containerName="console" containerID="cri-o://9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b" gracePeriod=15 Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.419399 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22"] Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.547711 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-28xh9_3d329678-3edc-4b70-9796-85c6ada120de/console/0.log" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.547961 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.712189 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-oauth-config\") pod \"3d329678-3edc-4b70-9796-85c6ada120de\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.712260 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-service-ca\") pod \"3d329678-3edc-4b70-9796-85c6ada120de\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.712355 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-serving-cert\") pod \"3d329678-3edc-4b70-9796-85c6ada120de\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.713184 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-service-ca" (OuterVolumeSpecName: "service-ca") pod "3d329678-3edc-4b70-9796-85c6ada120de" (UID: "3d329678-3edc-4b70-9796-85c6ada120de"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.713569 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-oauth-serving-cert\") pod \"3d329678-3edc-4b70-9796-85c6ada120de\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.713623 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbxtk\" (UniqueName: \"kubernetes.io/projected/3d329678-3edc-4b70-9796-85c6ada120de-kube-api-access-tbxtk\") pod \"3d329678-3edc-4b70-9796-85c6ada120de\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.713665 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-trusted-ca-bundle\") pod \"3d329678-3edc-4b70-9796-85c6ada120de\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.713690 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-console-config\") pod \"3d329678-3edc-4b70-9796-85c6ada120de\" (UID: \"3d329678-3edc-4b70-9796-85c6ada120de\") " Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.713993 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.714125 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-oauth-serving-cert" 
(OuterVolumeSpecName: "oauth-serving-cert") pod "3d329678-3edc-4b70-9796-85c6ada120de" (UID: "3d329678-3edc-4b70-9796-85c6ada120de"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.714230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3d329678-3edc-4b70-9796-85c6ada120de" (UID: "3d329678-3edc-4b70-9796-85c6ada120de"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.714306 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-console-config" (OuterVolumeSpecName: "console-config") pod "3d329678-3edc-4b70-9796-85c6ada120de" (UID: "3d329678-3edc-4b70-9796-85c6ada120de"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.718937 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d329678-3edc-4b70-9796-85c6ada120de-kube-api-access-tbxtk" (OuterVolumeSpecName: "kube-api-access-tbxtk") pod "3d329678-3edc-4b70-9796-85c6ada120de" (UID: "3d329678-3edc-4b70-9796-85c6ada120de"). InnerVolumeSpecName "kube-api-access-tbxtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.719561 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3d329678-3edc-4b70-9796-85c6ada120de" (UID: "3d329678-3edc-4b70-9796-85c6ada120de"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.719849 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3d329678-3edc-4b70-9796-85c6ada120de" (UID: "3d329678-3edc-4b70-9796-85c6ada120de"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.814797 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.814863 4835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.814883 4835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.814894 4835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d329678-3edc-4b70-9796-85c6ada120de-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.814905 4835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d329678-3edc-4b70-9796-85c6ada120de-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:28 crc kubenswrapper[4835]: I0216 15:20:28.814917 4835 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-tbxtk\" (UniqueName: \"kubernetes.io/projected/3d329678-3edc-4b70-9796-85c6ada120de-kube-api-access-tbxtk\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.307755 4835 generic.go:334] "Generic (PLEG): container finished" podID="54746025-5068-4cc5-ba0a-a24755a67627" containerID="183e1217bc6c70254327bffcb64f3f666e1f86561872fcdfcf576b19bb9931d5" exitCode=0 Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.307847 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" event={"ID":"54746025-5068-4cc5-ba0a-a24755a67627","Type":"ContainerDied","Data":"183e1217bc6c70254327bffcb64f3f666e1f86561872fcdfcf576b19bb9931d5"} Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.307883 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" event={"ID":"54746025-5068-4cc5-ba0a-a24755a67627","Type":"ContainerStarted","Data":"16211f744b8a14dda7c949cd63703b91d7bdadb27d995e61496224bdd040ffc4"} Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.310291 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-28xh9_3d329678-3edc-4b70-9796-85c6ada120de/console/0.log" Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.310340 4835 generic.go:334] "Generic (PLEG): container finished" podID="3d329678-3edc-4b70-9796-85c6ada120de" containerID="9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b" exitCode=2 Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.310369 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-28xh9" event={"ID":"3d329678-3edc-4b70-9796-85c6ada120de","Type":"ContainerDied","Data":"9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b"} Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.310391 
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-28xh9" event={"ID":"3d329678-3edc-4b70-9796-85c6ada120de","Type":"ContainerDied","Data":"1a48093372db7728dc7fe2be114ed60de2f868a15910037967158578bcc1c509"} Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.310408 4835 scope.go:117] "RemoveContainer" containerID="9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b" Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.310427 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-28xh9" Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.336460 4835 scope.go:117] "RemoveContainer" containerID="9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b" Feb 16 15:20:29 crc kubenswrapper[4835]: E0216 15:20:29.337018 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b\": container with ID starting with 9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b not found: ID does not exist" containerID="9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b" Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.337067 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b"} err="failed to get container status \"9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b\": rpc error: code = NotFound desc = could not find container \"9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b\": container with ID starting with 9091de00da22a965f525c5401aec4e9c53a484ca6cb25ff25378ff1bf2e32b4b not found: ID does not exist" Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.351033 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-28xh9"] Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.358954 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-28xh9"] Feb 16 15:20:29 crc kubenswrapper[4835]: I0216 15:20:29.390430 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d329678-3edc-4b70-9796-85c6ada120de" path="/var/lib/kubelet/pods/3d329678-3edc-4b70-9796-85c6ada120de/volumes" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.170628 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmwm9"] Feb 16 15:20:31 crc kubenswrapper[4835]: E0216 15:20:31.173568 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d329678-3edc-4b70-9796-85c6ada120de" containerName="console" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.173649 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d329678-3edc-4b70-9796-85c6ada120de" containerName="console" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.173907 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d329678-3edc-4b70-9796-85c6ada120de" containerName="console" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.175966 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.185389 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmwm9"] Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.246656 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-utilities\") pod \"redhat-operators-vmwm9\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.246944 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-catalog-content\") pod \"redhat-operators-vmwm9\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.247042 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pctb\" (UniqueName: \"kubernetes.io/projected/2f5b09b5-923d-4039-ab09-53d46aeb2be7-kube-api-access-6pctb\") pod \"redhat-operators-vmwm9\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.327949 4835 generic.go:334] "Generic (PLEG): container finished" podID="54746025-5068-4cc5-ba0a-a24755a67627" containerID="7fd1dbc88b4f2fd131bda1cd76009036c774de78cdb12aec21fa60a71881277d" exitCode=0 Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.328014 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" 
event={"ID":"54746025-5068-4cc5-ba0a-a24755a67627","Type":"ContainerDied","Data":"7fd1dbc88b4f2fd131bda1cd76009036c774de78cdb12aec21fa60a71881277d"} Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.347808 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-utilities\") pod \"redhat-operators-vmwm9\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.347897 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-catalog-content\") pod \"redhat-operators-vmwm9\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.348278 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pctb\" (UniqueName: \"kubernetes.io/projected/2f5b09b5-923d-4039-ab09-53d46aeb2be7-kube-api-access-6pctb\") pod \"redhat-operators-vmwm9\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.348313 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-catalog-content\") pod \"redhat-operators-vmwm9\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.348603 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-utilities\") pod \"redhat-operators-vmwm9\" (UID: 
\"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.374687 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pctb\" (UniqueName: \"kubernetes.io/projected/2f5b09b5-923d-4039-ab09-53d46aeb2be7-kube-api-access-6pctb\") pod \"redhat-operators-vmwm9\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.536248 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:31 crc kubenswrapper[4835]: I0216 15:20:31.948778 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmwm9"] Feb 16 15:20:32 crc kubenswrapper[4835]: I0216 15:20:32.336031 4835 generic.go:334] "Generic (PLEG): container finished" podID="54746025-5068-4cc5-ba0a-a24755a67627" containerID="06549bc15815608886f351c94cfbea5c538f58509b4de04e4fd93e47d7337a43" exitCode=0 Feb 16 15:20:32 crc kubenswrapper[4835]: I0216 15:20:32.336109 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" event={"ID":"54746025-5068-4cc5-ba0a-a24755a67627","Type":"ContainerDied","Data":"06549bc15815608886f351c94cfbea5c538f58509b4de04e4fd93e47d7337a43"} Feb 16 15:20:32 crc kubenswrapper[4835]: I0216 15:20:32.338059 4835 generic.go:334] "Generic (PLEG): container finished" podID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerID="d114b4917ad301ba92a0baf425cea01a547fbe348526d6b19a3ec393428c9bc9" exitCode=0 Feb 16 15:20:32 crc kubenswrapper[4835]: I0216 15:20:32.338111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmwm9" 
event={"ID":"2f5b09b5-923d-4039-ab09-53d46aeb2be7","Type":"ContainerDied","Data":"d114b4917ad301ba92a0baf425cea01a547fbe348526d6b19a3ec393428c9bc9"} Feb 16 15:20:32 crc kubenswrapper[4835]: I0216 15:20:32.338134 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmwm9" event={"ID":"2f5b09b5-923d-4039-ab09-53d46aeb2be7","Type":"ContainerStarted","Data":"c550f7a50ddd2f4bc5778edae07397102025b43b9db2bab86d2c60b8fb63d58f"} Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.345160 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmwm9" event={"ID":"2f5b09b5-923d-4039-ab09-53d46aeb2be7","Type":"ContainerStarted","Data":"20b0eca6fe384f1bbb6e6a1e1f60d9f083783ce99a2996d5ea3529d2b337fdf4"} Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.611038 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.779309 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-bundle\") pod \"54746025-5068-4cc5-ba0a-a24755a67627\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.779386 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l86v4\" (UniqueName: \"kubernetes.io/projected/54746025-5068-4cc5-ba0a-a24755a67627-kube-api-access-l86v4\") pod \"54746025-5068-4cc5-ba0a-a24755a67627\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.779443 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-util\") pod 
\"54746025-5068-4cc5-ba0a-a24755a67627\" (UID: \"54746025-5068-4cc5-ba0a-a24755a67627\") " Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.781148 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-bundle" (OuterVolumeSpecName: "bundle") pod "54746025-5068-4cc5-ba0a-a24755a67627" (UID: "54746025-5068-4cc5-ba0a-a24755a67627"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.785494 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54746025-5068-4cc5-ba0a-a24755a67627-kube-api-access-l86v4" (OuterVolumeSpecName: "kube-api-access-l86v4") pod "54746025-5068-4cc5-ba0a-a24755a67627" (UID: "54746025-5068-4cc5-ba0a-a24755a67627"). InnerVolumeSpecName "kube-api-access-l86v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.881268 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.881304 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l86v4\" (UniqueName: \"kubernetes.io/projected/54746025-5068-4cc5-ba0a-a24755a67627-kube-api-access-l86v4\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.926867 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-util" (OuterVolumeSpecName: "util") pod "54746025-5068-4cc5-ba0a-a24755a67627" (UID: "54746025-5068-4cc5-ba0a-a24755a67627"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:20:33 crc kubenswrapper[4835]: I0216 15:20:33.982289 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54746025-5068-4cc5-ba0a-a24755a67627-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:34 crc kubenswrapper[4835]: I0216 15:20:34.351724 4835 generic.go:334] "Generic (PLEG): container finished" podID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerID="20b0eca6fe384f1bbb6e6a1e1f60d9f083783ce99a2996d5ea3529d2b337fdf4" exitCode=0 Feb 16 15:20:34 crc kubenswrapper[4835]: I0216 15:20:34.351794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmwm9" event={"ID":"2f5b09b5-923d-4039-ab09-53d46aeb2be7","Type":"ContainerDied","Data":"20b0eca6fe384f1bbb6e6a1e1f60d9f083783ce99a2996d5ea3529d2b337fdf4"} Feb 16 15:20:34 crc kubenswrapper[4835]: I0216 15:20:34.354101 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" event={"ID":"54746025-5068-4cc5-ba0a-a24755a67627","Type":"ContainerDied","Data":"16211f744b8a14dda7c949cd63703b91d7bdadb27d995e61496224bdd040ffc4"} Feb 16 15:20:34 crc kubenswrapper[4835]: I0216 15:20:34.354147 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16211f744b8a14dda7c949cd63703b91d7bdadb27d995e61496224bdd040ffc4" Feb 16 15:20:34 crc kubenswrapper[4835]: I0216 15:20:34.354220 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22" Feb 16 15:20:35 crc kubenswrapper[4835]: I0216 15:20:35.363397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmwm9" event={"ID":"2f5b09b5-923d-4039-ab09-53d46aeb2be7","Type":"ContainerStarted","Data":"099b3bfa28c202e6f1f9e65053ed331f87e9f1f572245b7059f9710c10376352"} Feb 16 15:20:35 crc kubenswrapper[4835]: I0216 15:20:35.389756 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmwm9" podStartSLOduration=1.9980087640000002 podStartE2EDuration="4.389733334s" podCreationTimestamp="2026-02-16 15:20:31 +0000 UTC" firstStartedPulling="2026-02-16 15:20:32.339452692 +0000 UTC m=+781.631445587" lastFinishedPulling="2026-02-16 15:20:34.731177262 +0000 UTC m=+784.023170157" observedRunningTime="2026-02-16 15:20:35.38647807 +0000 UTC m=+784.678470995" watchObservedRunningTime="2026-02-16 15:20:35.389733334 +0000 UTC m=+784.681726239" Feb 16 15:20:41 crc kubenswrapper[4835]: I0216 15:20:41.536981 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:41 crc kubenswrapper[4835]: I0216 15:20:41.537575 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:41 crc kubenswrapper[4835]: I0216 15:20:41.604591 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:42 crc kubenswrapper[4835]: I0216 15:20:42.453239 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.165687 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmwm9"] Feb 16 
15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.584820 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79"] Feb 16 15:20:43 crc kubenswrapper[4835]: E0216 15:20:43.585408 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54746025-5068-4cc5-ba0a-a24755a67627" containerName="pull" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.585428 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="54746025-5068-4cc5-ba0a-a24755a67627" containerName="pull" Feb 16 15:20:43 crc kubenswrapper[4835]: E0216 15:20:43.585442 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54746025-5068-4cc5-ba0a-a24755a67627" containerName="extract" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.585451 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="54746025-5068-4cc5-ba0a-a24755a67627" containerName="extract" Feb 16 15:20:43 crc kubenswrapper[4835]: E0216 15:20:43.585466 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54746025-5068-4cc5-ba0a-a24755a67627" containerName="util" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.585475 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="54746025-5068-4cc5-ba0a-a24755a67627" containerName="util" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.585638 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="54746025-5068-4cc5-ba0a-a24755a67627" containerName="extract" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.586133 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.589786 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.590056 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.590077 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.592108 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.593445 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wzpg8" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.602890 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79"] Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.724496 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4aa6fafd-b26a-4c22-b680-7123fabb665e-apiservice-cert\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: \"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.724635 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4aa6fafd-b26a-4c22-b680-7123fabb665e-webhook-cert\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: 
\"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.724853 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2xg\" (UniqueName: \"kubernetes.io/projected/4aa6fafd-b26a-4c22-b680-7123fabb665e-kube-api-access-bt2xg\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: \"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.825776 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4aa6fafd-b26a-4c22-b680-7123fabb665e-apiservice-cert\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: \"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.825831 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4aa6fafd-b26a-4c22-b680-7123fabb665e-webhook-cert\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: \"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.825894 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2xg\" (UniqueName: \"kubernetes.io/projected/4aa6fafd-b26a-4c22-b680-7123fabb665e-kube-api-access-bt2xg\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: \"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.845478 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4aa6fafd-b26a-4c22-b680-7123fabb665e-webhook-cert\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: \"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.845489 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4aa6fafd-b26a-4c22-b680-7123fabb665e-apiservice-cert\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: \"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.850154 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2xg\" (UniqueName: \"kubernetes.io/projected/4aa6fafd-b26a-4c22-b680-7123fabb665e-kube-api-access-bt2xg\") pod \"metallb-operator-controller-manager-5dccc995f8-z6x79\" (UID: \"4aa6fafd-b26a-4c22-b680-7123fabb665e\") " pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.903292 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.913300 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx"] Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.914024 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.918039 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.918909 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g5smb" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.919065 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 15:20:43 crc kubenswrapper[4835]: I0216 15:20:43.948445 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx"] Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.028755 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d8c24f5-95ab-48d4-9007-75a10c8e743a-webhook-cert\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.028807 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72b8\" (UniqueName: \"kubernetes.io/projected/1d8c24f5-95ab-48d4-9007-75a10c8e743a-kube-api-access-p72b8\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.028836 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/1d8c24f5-95ab-48d4-9007-75a10c8e743a-apiservice-cert\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.129609 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d8c24f5-95ab-48d4-9007-75a10c8e743a-apiservice-cert\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.129698 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d8c24f5-95ab-48d4-9007-75a10c8e743a-webhook-cert\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.129737 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p72b8\" (UniqueName: \"kubernetes.io/projected/1d8c24f5-95ab-48d4-9007-75a10c8e743a-kube-api-access-p72b8\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.134082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d8c24f5-95ab-48d4-9007-75a10c8e743a-apiservice-cert\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 
15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.134107 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d8c24f5-95ab-48d4-9007-75a10c8e743a-webhook-cert\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.150224 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72b8\" (UniqueName: \"kubernetes.io/projected/1d8c24f5-95ab-48d4-9007-75a10c8e743a-kube-api-access-p72b8\") pod \"metallb-operator-webhook-server-58dccf7f7b-krtpx\" (UID: \"1d8c24f5-95ab-48d4-9007-75a10c8e743a\") " pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.268418 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.374494 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79"] Feb 16 15:20:44 crc kubenswrapper[4835]: W0216 15:20:44.388730 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aa6fafd_b26a_4c22_b680_7123fabb665e.slice/crio-5035ac0469a59e9114544c0459f0bbd4fa670f5feabe2b9b96e72a570d4ee987 WatchSource:0}: Error finding container 5035ac0469a59e9114544c0459f0bbd4fa670f5feabe2b9b96e72a570d4ee987: Status 404 returned error can't find the container with id 5035ac0469a59e9114544c0459f0bbd4fa670f5feabe2b9b96e72a570d4ee987 Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.432369 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmwm9" 
podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerName="registry-server" containerID="cri-o://099b3bfa28c202e6f1f9e65053ed331f87e9f1f572245b7059f9710c10376352" gracePeriod=2 Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.432780 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" event={"ID":"4aa6fafd-b26a-4c22-b680-7123fabb665e","Type":"ContainerStarted","Data":"5035ac0469a59e9114544c0459f0bbd4fa670f5feabe2b9b96e72a570d4ee987"} Feb 16 15:20:44 crc kubenswrapper[4835]: I0216 15:20:44.497508 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx"] Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.450852 4835 generic.go:334] "Generic (PLEG): container finished" podID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerID="099b3bfa28c202e6f1f9e65053ed331f87e9f1f572245b7059f9710c10376352" exitCode=0 Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.452323 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmwm9" event={"ID":"2f5b09b5-923d-4039-ab09-53d46aeb2be7","Type":"ContainerDied","Data":"099b3bfa28c202e6f1f9e65053ed331f87e9f1f572245b7059f9710c10376352"} Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.454821 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" event={"ID":"1d8c24f5-95ab-48d4-9007-75a10c8e743a","Type":"ContainerStarted","Data":"26dfadbf36a16ffe0e21c4555a218d7bb472320fe494df419c5bb87c0918e59b"} Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.640140 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.750496 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-catalog-content\") pod \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.750899 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-utilities\") pod \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.750940 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pctb\" (UniqueName: \"kubernetes.io/projected/2f5b09b5-923d-4039-ab09-53d46aeb2be7-kube-api-access-6pctb\") pod \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\" (UID: \"2f5b09b5-923d-4039-ab09-53d46aeb2be7\") " Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.751919 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-utilities" (OuterVolumeSpecName: "utilities") pod "2f5b09b5-923d-4039-ab09-53d46aeb2be7" (UID: "2f5b09b5-923d-4039-ab09-53d46aeb2be7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.756987 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5b09b5-923d-4039-ab09-53d46aeb2be7-kube-api-access-6pctb" (OuterVolumeSpecName: "kube-api-access-6pctb") pod "2f5b09b5-923d-4039-ab09-53d46aeb2be7" (UID: "2f5b09b5-923d-4039-ab09-53d46aeb2be7"). InnerVolumeSpecName "kube-api-access-6pctb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.852956 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.853010 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pctb\" (UniqueName: \"kubernetes.io/projected/2f5b09b5-923d-4039-ab09-53d46aeb2be7-kube-api-access-6pctb\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.871368 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f5b09b5-923d-4039-ab09-53d46aeb2be7" (UID: "2f5b09b5-923d-4039-ab09-53d46aeb2be7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:20:45 crc kubenswrapper[4835]: I0216 15:20:45.954389 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5b09b5-923d-4039-ab09-53d46aeb2be7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:20:46 crc kubenswrapper[4835]: I0216 15:20:46.464565 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmwm9" event={"ID":"2f5b09b5-923d-4039-ab09-53d46aeb2be7","Type":"ContainerDied","Data":"c550f7a50ddd2f4bc5778edae07397102025b43b9db2bab86d2c60b8fb63d58f"} Feb 16 15:20:46 crc kubenswrapper[4835]: I0216 15:20:46.464620 4835 scope.go:117] "RemoveContainer" containerID="099b3bfa28c202e6f1f9e65053ed331f87e9f1f572245b7059f9710c10376352" Feb 16 15:20:46 crc kubenswrapper[4835]: I0216 15:20:46.464752 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmwm9" Feb 16 15:20:46 crc kubenswrapper[4835]: I0216 15:20:46.498687 4835 scope.go:117] "RemoveContainer" containerID="20b0eca6fe384f1bbb6e6a1e1f60d9f083783ce99a2996d5ea3529d2b337fdf4" Feb 16 15:20:46 crc kubenswrapper[4835]: I0216 15:20:46.508582 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmwm9"] Feb 16 15:20:46 crc kubenswrapper[4835]: I0216 15:20:46.510746 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmwm9"] Feb 16 15:20:46 crc kubenswrapper[4835]: I0216 15:20:46.536886 4835 scope.go:117] "RemoveContainer" containerID="d114b4917ad301ba92a0baf425cea01a547fbe348526d6b19a3ec393428c9bc9" Feb 16 15:20:47 crc kubenswrapper[4835]: I0216 15:20:47.389677 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" path="/var/lib/kubelet/pods/2f5b09b5-923d-4039-ab09-53d46aeb2be7/volumes" Feb 16 15:20:49 crc kubenswrapper[4835]: I0216 15:20:49.484177 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" event={"ID":"1d8c24f5-95ab-48d4-9007-75a10c8e743a","Type":"ContainerStarted","Data":"645a4713d2ac16ef6fe93bfb3ce54f7e10cb7ba1eacee3564eac61db83359857"} Feb 16 15:20:49 crc kubenswrapper[4835]: I0216 15:20:49.484808 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:20:49 crc kubenswrapper[4835]: I0216 15:20:49.486580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" event={"ID":"4aa6fafd-b26a-4c22-b680-7123fabb665e","Type":"ContainerStarted","Data":"53860ca8d3329f17769a19de73b8b978c1153587ec5c06603b2d568768e7dbda"} Feb 16 15:20:49 crc kubenswrapper[4835]: I0216 15:20:49.486718 4835 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:20:49 crc kubenswrapper[4835]: I0216 15:20:49.504772 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" podStartSLOduration=2.004034472 podStartE2EDuration="6.504758155s" podCreationTimestamp="2026-02-16 15:20:43 +0000 UTC" firstStartedPulling="2026-02-16 15:20:44.503257648 +0000 UTC m=+793.795250563" lastFinishedPulling="2026-02-16 15:20:49.003981341 +0000 UTC m=+798.295974246" observedRunningTime="2026-02-16 15:20:49.503241186 +0000 UTC m=+798.795234081" watchObservedRunningTime="2026-02-16 15:20:49.504758155 +0000 UTC m=+798.796751050" Feb 16 15:20:49 crc kubenswrapper[4835]: I0216 15:20:49.540879 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" podStartSLOduration=1.946856804 podStartE2EDuration="6.540861548s" podCreationTimestamp="2026-02-16 15:20:43 +0000 UTC" firstStartedPulling="2026-02-16 15:20:44.391805707 +0000 UTC m=+793.683798602" lastFinishedPulling="2026-02-16 15:20:48.985810421 +0000 UTC m=+798.277803346" observedRunningTime="2026-02-16 15:20:49.538265271 +0000 UTC m=+798.830258166" watchObservedRunningTime="2026-02-16 15:20:49.540861548 +0000 UTC m=+798.832854443" Feb 16 15:21:04 crc kubenswrapper[4835]: I0216 15:21:04.274561 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58dccf7f7b-krtpx" Feb 16 15:21:23 crc kubenswrapper[4835]: I0216 15:21:23.907248 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5dccc995f8-z6x79" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.760350 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9"] Feb 16 15:21:24 crc kubenswrapper[4835]: E0216 15:21:24.763974 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerName="extract-utilities" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.764005 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerName="extract-utilities" Feb 16 15:21:24 crc kubenswrapper[4835]: E0216 15:21:24.764023 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerName="registry-server" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.764030 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerName="registry-server" Feb 16 15:21:24 crc kubenswrapper[4835]: E0216 15:21:24.764048 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerName="extract-content" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.764055 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerName="extract-content" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.764254 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f5b09b5-923d-4039-ab09-53d46aeb2be7" containerName="registry-server" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.765056 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.775443 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.778865 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-prlg8" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.796038 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wh6l7"] Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.799506 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.802277 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.802671 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.803449 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9"] Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.852992 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nwtq4"] Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.854073 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nwtq4" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.855607 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-w9tbp" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.856728 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.856895 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.857361 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.861466 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-f58q9"] Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.862464 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.863640 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.868821 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-f58q9"] Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.869633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-sockets\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.869694 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-metrics\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.869723 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9006758e-7767-40e8-a854-d5daeb3d7a2c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-b84c9\" (UID: \"9006758e-7767-40e8-a854-d5daeb3d7a2c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.869749 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-conf\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 
15:21:24.869789 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-reloader\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.869814 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bkz\" (UniqueName: \"kubernetes.io/projected/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-kube-api-access-g5bkz\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.869834 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-metrics-certs\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.869849 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-startup\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.869886 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj55n\" (UniqueName: \"kubernetes.io/projected/9006758e-7767-40e8-a854-d5daeb3d7a2c-kube-api-access-mj55n\") pod \"frr-k8s-webhook-server-78b44bf5bb-b84c9\" (UID: \"9006758e-7767-40e8-a854-d5daeb3d7a2c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971278 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-sockets\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971337 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bm9l\" (UniqueName: \"kubernetes.io/projected/f1616308-8325-4a28-87e2-c72ed44cc83c-kube-api-access-4bm9l\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971372 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0dee595f-4a5b-4986-838c-01782210cb69-metrics-certs\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971393 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-metrics\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971416 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9006758e-7767-40e8-a854-d5daeb3d7a2c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-b84c9\" (UID: \"9006758e-7767-40e8-a854-d5daeb3d7a2c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971437 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fk8jm\" (UniqueName: \"kubernetes.io/projected/0dee595f-4a5b-4986-838c-01782210cb69-kube-api-access-fk8jm\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971452 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-conf\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971470 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-metrics-certs\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971573 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f1616308-8325-4a28-87e2-c72ed44cc83c-metallb-excludel2\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971614 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-reloader\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971652 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5bkz\" (UniqueName: 
\"kubernetes.io/projected/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-kube-api-access-g5bkz\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971672 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-metrics-certs\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971702 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-startup\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971736 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0dee595f-4a5b-4986-838c-01782210cb69-cert\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971792 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj55n\" (UniqueName: \"kubernetes.io/projected/9006758e-7767-40e8-a854-d5daeb3d7a2c-kube-api-access-mj55n\") pod \"frr-k8s-webhook-server-78b44bf5bb-b84c9\" (UID: \"9006758e-7767-40e8-a854-d5daeb3d7a2c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971812 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist\") pod 
\"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971906 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-conf\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.971911 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-metrics\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: E0216 15:21:24.971667 4835 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 16 15:21:24 crc kubenswrapper[4835]: E0216 15:21:24.972058 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9006758e-7767-40e8-a854-d5daeb3d7a2c-cert podName:9006758e-7767-40e8-a854-d5daeb3d7a2c nodeName:}" failed. No retries permitted until 2026-02-16 15:21:25.472041337 +0000 UTC m=+834.764034232 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9006758e-7767-40e8-a854-d5daeb3d7a2c-cert") pod "frr-k8s-webhook-server-78b44bf5bb-b84c9" (UID: "9006758e-7767-40e8-a854-d5daeb3d7a2c") : secret "frr-k8s-webhook-server-cert" not found Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.972796 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-startup\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.972988 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-reloader\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.973024 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-frr-sockets\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.978049 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-metrics-certs\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.993788 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5bkz\" (UniqueName: \"kubernetes.io/projected/cc5ce011-3151-4f6d-98d7-b20df83ff8b3-kube-api-access-g5bkz\") pod \"frr-k8s-wh6l7\" (UID: \"cc5ce011-3151-4f6d-98d7-b20df83ff8b3\") " 
pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:24 crc kubenswrapper[4835]: I0216 15:21:24.996846 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj55n\" (UniqueName: \"kubernetes.io/projected/9006758e-7767-40e8-a854-d5daeb3d7a2c-kube-api-access-mj55n\") pod \"frr-k8s-webhook-server-78b44bf5bb-b84c9\" (UID: \"9006758e-7767-40e8-a854-d5daeb3d7a2c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.073374 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.073446 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bm9l\" (UniqueName: \"kubernetes.io/projected/f1616308-8325-4a28-87e2-c72ed44cc83c-kube-api-access-4bm9l\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.073484 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0dee595f-4a5b-4986-838c-01782210cb69-metrics-certs\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.073561 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk8jm\" (UniqueName: \"kubernetes.io/projected/0dee595f-4a5b-4986-838c-01782210cb69-kube-api-access-fk8jm\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:25 
crc kubenswrapper[4835]: I0216 15:21:25.073590 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-metrics-certs\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.073630 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f1616308-8325-4a28-87e2-c72ed44cc83c-metallb-excludel2\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.073667 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0dee595f-4a5b-4986-838c-01782210cb69-cert\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:25 crc kubenswrapper[4835]: E0216 15:21:25.074412 4835 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 15:21:25 crc kubenswrapper[4835]: E0216 15:21:25.074480 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist podName:f1616308-8325-4a28-87e2-c72ed44cc83c nodeName:}" failed. No retries permitted until 2026-02-16 15:21:25.574460315 +0000 UTC m=+834.866453210 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist") pod "speaker-nwtq4" (UID: "f1616308-8325-4a28-87e2-c72ed44cc83c") : secret "metallb-memberlist" not found Feb 16 15:21:25 crc kubenswrapper[4835]: E0216 15:21:25.075210 4835 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 16 15:21:25 crc kubenswrapper[4835]: E0216 15:21:25.075287 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-metrics-certs podName:f1616308-8325-4a28-87e2-c72ed44cc83c nodeName:}" failed. No retries permitted until 2026-02-16 15:21:25.575267856 +0000 UTC m=+834.867260751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-metrics-certs") pod "speaker-nwtq4" (UID: "f1616308-8325-4a28-87e2-c72ed44cc83c") : secret "speaker-certs-secret" not found Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.076278 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f1616308-8325-4a28-87e2-c72ed44cc83c-metallb-excludel2\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.078950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0dee595f-4a5b-4986-838c-01782210cb69-metrics-certs\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.079398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/0dee595f-4a5b-4986-838c-01782210cb69-cert\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.092627 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk8jm\" (UniqueName: \"kubernetes.io/projected/0dee595f-4a5b-4986-838c-01782210cb69-kube-api-access-fk8jm\") pod \"controller-69bbfbf88f-f58q9\" (UID: \"0dee595f-4a5b-4986-838c-01782210cb69\") " pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.094612 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bm9l\" (UniqueName: \"kubernetes.io/projected/f1616308-8325-4a28-87e2-c72ed44cc83c-kube-api-access-4bm9l\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.118712 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.187299 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.442174 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-f58q9"] Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.478547 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9006758e-7767-40e8-a854-d5daeb3d7a2c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-b84c9\" (UID: \"9006758e-7767-40e8-a854-d5daeb3d7a2c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.485292 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9006758e-7767-40e8-a854-d5daeb3d7a2c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-b84c9\" (UID: \"9006758e-7767-40e8-a854-d5daeb3d7a2c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.579592 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-metrics-certs\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.579660 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: E0216 15:21:25.579774 4835 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 15:21:25 crc kubenswrapper[4835]: E0216 15:21:25.579816 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist podName:f1616308-8325-4a28-87e2-c72ed44cc83c nodeName:}" failed. No retries permitted until 2026-02-16 15:21:26.579803387 +0000 UTC m=+835.871796282 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist") pod "speaker-nwtq4" (UID: "f1616308-8325-4a28-87e2-c72ed44cc83c") : secret "metallb-memberlist" not found Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.584935 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-metrics-certs\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.689993 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.790006 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerStarted","Data":"5726ee74d4eae4e4c8fa350d4c9004c34fc48b59a4ec405afbae3de42e0aee6a"} Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.792072 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-f58q9" event={"ID":"0dee595f-4a5b-4986-838c-01782210cb69","Type":"ContainerStarted","Data":"46f5fafb77a9ce13049e99e26b1ef3ef312c62fcfdbc68326f12d2ee546f47fa"} Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.792113 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-f58q9" event={"ID":"0dee595f-4a5b-4986-838c-01782210cb69","Type":"ContainerStarted","Data":"0f9ee0b47c80d5d84cfe587e3a73ef348d0b6dedfa093695cf5192ef3cf34dd7"} Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.792126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-f58q9" event={"ID":"0dee595f-4a5b-4986-838c-01782210cb69","Type":"ContainerStarted","Data":"eaee9592b6ee568af15900a85e2dca181127d1321d02f712096d6305d7c46838"} Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.792172 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:25 crc kubenswrapper[4835]: I0216 15:21:25.806691 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-f58q9" podStartSLOduration=1.80665795 podStartE2EDuration="1.80665795s" podCreationTimestamp="2026-02-16 15:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:21:25.804872884 +0000 UTC 
m=+835.096865799" watchObservedRunningTime="2026-02-16 15:21:25.80665795 +0000 UTC m=+835.098650845" Feb 16 15:21:26 crc kubenswrapper[4835]: I0216 15:21:26.090007 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9"] Feb 16 15:21:26 crc kubenswrapper[4835]: I0216 15:21:26.594926 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:26 crc kubenswrapper[4835]: I0216 15:21:26.603347 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f1616308-8325-4a28-87e2-c72ed44cc83c-memberlist\") pod \"speaker-nwtq4\" (UID: \"f1616308-8325-4a28-87e2-c72ed44cc83c\") " pod="metallb-system/speaker-nwtq4" Feb 16 15:21:26 crc kubenswrapper[4835]: I0216 15:21:26.671154 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nwtq4" Feb 16 15:21:26 crc kubenswrapper[4835]: W0216 15:21:26.704311 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1616308_8325_4a28_87e2_c72ed44cc83c.slice/crio-eadca366d9cc67bfb3e177f80497166f958d52845231e7bde11e3bb91cf94942 WatchSource:0}: Error finding container eadca366d9cc67bfb3e177f80497166f958d52845231e7bde11e3bb91cf94942: Status 404 returned error can't find the container with id eadca366d9cc67bfb3e177f80497166f958d52845231e7bde11e3bb91cf94942 Feb 16 15:21:26 crc kubenswrapper[4835]: I0216 15:21:26.801126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nwtq4" event={"ID":"f1616308-8325-4a28-87e2-c72ed44cc83c","Type":"ContainerStarted","Data":"eadca366d9cc67bfb3e177f80497166f958d52845231e7bde11e3bb91cf94942"} Feb 16 15:21:26 crc kubenswrapper[4835]: I0216 15:21:26.802297 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" event={"ID":"9006758e-7767-40e8-a854-d5daeb3d7a2c","Type":"ContainerStarted","Data":"a30dcb8e7880a7d032a17cf733839e1039aa2ff416a6b0afaa309120e592e136"} Feb 16 15:21:27 crc kubenswrapper[4835]: I0216 15:21:27.812026 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nwtq4" event={"ID":"f1616308-8325-4a28-87e2-c72ed44cc83c","Type":"ContainerStarted","Data":"4d7a1ea807a366df432cdcbd0c0d01d55aa48a4d22d99e1874bf6fe37cc7fe3f"} Feb 16 15:21:27 crc kubenswrapper[4835]: I0216 15:21:27.813583 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nwtq4" Feb 16 15:21:27 crc kubenswrapper[4835]: I0216 15:21:27.813702 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nwtq4" 
event={"ID":"f1616308-8325-4a28-87e2-c72ed44cc83c","Type":"ContainerStarted","Data":"65fb11d8a56d1a245fa581c6467d3f31732e03f7e2e5b6ea171939c6086dc038"} Feb 16 15:21:27 crc kubenswrapper[4835]: I0216 15:21:27.831169 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nwtq4" podStartSLOduration=3.831153069 podStartE2EDuration="3.831153069s" podCreationTimestamp="2026-02-16 15:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:21:27.828956092 +0000 UTC m=+837.120948987" watchObservedRunningTime="2026-02-16 15:21:27.831153069 +0000 UTC m=+837.123145964" Feb 16 15:21:33 crc kubenswrapper[4835]: I0216 15:21:33.860910 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" event={"ID":"9006758e-7767-40e8-a854-d5daeb3d7a2c","Type":"ContainerStarted","Data":"33ecd7b405d477b2ab048eb445118e55e7b8e1d21d4b33d254305223c4f5a228"} Feb 16 15:21:33 crc kubenswrapper[4835]: I0216 15:21:33.861509 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:33 crc kubenswrapper[4835]: I0216 15:21:33.865011 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc5ce011-3151-4f6d-98d7-b20df83ff8b3" containerID="fcaa78a3f82a887d13556505a9bf5bfd9afb13559a43e3acabe6d8b7fd78e230" exitCode=0 Feb 16 15:21:33 crc kubenswrapper[4835]: I0216 15:21:33.865056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerDied","Data":"fcaa78a3f82a887d13556505a9bf5bfd9afb13559a43e3acabe6d8b7fd78e230"} Feb 16 15:21:33 crc kubenswrapper[4835]: I0216 15:21:33.883120 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" 
podStartSLOduration=3.14216923 podStartE2EDuration="9.883100897s" podCreationTimestamp="2026-02-16 15:21:24 +0000 UTC" firstStartedPulling="2026-02-16 15:21:26.096708268 +0000 UTC m=+835.388701163" lastFinishedPulling="2026-02-16 15:21:32.837639935 +0000 UTC m=+842.129632830" observedRunningTime="2026-02-16 15:21:33.881214739 +0000 UTC m=+843.173207674" watchObservedRunningTime="2026-02-16 15:21:33.883100897 +0000 UTC m=+843.175093792" Feb 16 15:21:34 crc kubenswrapper[4835]: I0216 15:21:34.875605 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc5ce011-3151-4f6d-98d7-b20df83ff8b3" containerID="8dfbfb971b975c0df23932cb086698d46c751f0ef4cbe68f57d0eca20aeb62dc" exitCode=0 Feb 16 15:21:34 crc kubenswrapper[4835]: I0216 15:21:34.875953 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerDied","Data":"8dfbfb971b975c0df23932cb086698d46c751f0ef4cbe68f57d0eca20aeb62dc"} Feb 16 15:21:35 crc kubenswrapper[4835]: I0216 15:21:35.193647 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-f58q9" Feb 16 15:21:35 crc kubenswrapper[4835]: I0216 15:21:35.884272 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc5ce011-3151-4f6d-98d7-b20df83ff8b3" containerID="f4a2e04854fbdee79f21d533c5ea35890eb82d8358f7f2dbc796debfec026e62" exitCode=0 Feb 16 15:21:35 crc kubenswrapper[4835]: I0216 15:21:35.884303 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerDied","Data":"f4a2e04854fbdee79f21d533c5ea35890eb82d8358f7f2dbc796debfec026e62"} Feb 16 15:21:36 crc kubenswrapper[4835]: I0216 15:21:36.677388 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nwtq4" Feb 16 15:21:36 crc kubenswrapper[4835]: I0216 15:21:36.893359 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerStarted","Data":"c813d7088a90b261508f9ccbbe68840d98e9dad257aedd893093442e52501d52"} Feb 16 15:21:36 crc kubenswrapper[4835]: I0216 15:21:36.893674 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerStarted","Data":"893f594c2595ec38aa1ea52d8a8fb97880e3793f4752ff6162b6255832e64d7c"} Feb 16 15:21:36 crc kubenswrapper[4835]: I0216 15:21:36.893683 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerStarted","Data":"8a50ca788ca4fb4f03d6e73901a4ba1167a5081e729f2cfa41e258c5babcf950"} Feb 16 15:21:36 crc kubenswrapper[4835]: I0216 15:21:36.893691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerStarted","Data":"2773608a376f3c2ca4684eea02cd97a42b75e08bee12c322e3247c7ebc9d9053"} Feb 16 15:21:36 crc kubenswrapper[4835]: I0216 15:21:36.893700 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerStarted","Data":"1dab8b5fd16974b4ceaeaf7bb0c81dcf305b327f0dce68b06ed1a3d304adc85c"} Feb 16 15:21:37 crc kubenswrapper[4835]: I0216 15:21:37.902323 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wh6l7" event={"ID":"cc5ce011-3151-4f6d-98d7-b20df83ff8b3","Type":"ContainerStarted","Data":"a436c9f560c2f8fbb0a7343a081a17dd2409a201f89ef525c465d62f0887a40f"} Feb 16 15:21:37 crc kubenswrapper[4835]: I0216 15:21:37.902488 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:37 crc kubenswrapper[4835]: I0216 15:21:37.921224 4835 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wh6l7" podStartSLOduration=6.328327659 podStartE2EDuration="13.921210468s" podCreationTimestamp="2026-02-16 15:21:24 +0000 UTC" firstStartedPulling="2026-02-16 15:21:25.223260081 +0000 UTC m=+834.515252986" lastFinishedPulling="2026-02-16 15:21:32.8161429 +0000 UTC m=+842.108135795" observedRunningTime="2026-02-16 15:21:37.921048284 +0000 UTC m=+847.213041179" watchObservedRunningTime="2026-02-16 15:21:37.921210468 +0000 UTC m=+847.213203363" Feb 16 15:21:39 crc kubenswrapper[4835]: I0216 15:21:39.802088 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-p84sp"] Feb 16 15:21:39 crc kubenswrapper[4835]: I0216 15:21:39.803241 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-p84sp" Feb 16 15:21:39 crc kubenswrapper[4835]: I0216 15:21:39.805160 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 15:21:39 crc kubenswrapper[4835]: I0216 15:21:39.805610 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-cztvr" Feb 16 15:21:39 crc kubenswrapper[4835]: I0216 15:21:39.805794 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 15:21:39 crc kubenswrapper[4835]: I0216 15:21:39.834469 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p84sp"] Feb 16 15:21:39 crc kubenswrapper[4835]: I0216 15:21:39.905633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxtj6\" (UniqueName: \"kubernetes.io/projected/465bac28-e6d6-4d64-ae01-934e6c250b4a-kube-api-access-gxtj6\") pod \"openstack-operator-index-p84sp\" (UID: 
\"465bac28-e6d6-4d64-ae01-934e6c250b4a\") " pod="openstack-operators/openstack-operator-index-p84sp" Feb 16 15:21:40 crc kubenswrapper[4835]: I0216 15:21:40.007429 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxtj6\" (UniqueName: \"kubernetes.io/projected/465bac28-e6d6-4d64-ae01-934e6c250b4a-kube-api-access-gxtj6\") pod \"openstack-operator-index-p84sp\" (UID: \"465bac28-e6d6-4d64-ae01-934e6c250b4a\") " pod="openstack-operators/openstack-operator-index-p84sp" Feb 16 15:21:40 crc kubenswrapper[4835]: I0216 15:21:40.025094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxtj6\" (UniqueName: \"kubernetes.io/projected/465bac28-e6d6-4d64-ae01-934e6c250b4a-kube-api-access-gxtj6\") pod \"openstack-operator-index-p84sp\" (UID: \"465bac28-e6d6-4d64-ae01-934e6c250b4a\") " pod="openstack-operators/openstack-operator-index-p84sp" Feb 16 15:21:40 crc kubenswrapper[4835]: I0216 15:21:40.120257 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:40 crc kubenswrapper[4835]: I0216 15:21:40.140607 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-p84sp" Feb 16 15:21:40 crc kubenswrapper[4835]: I0216 15:21:40.157155 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:40 crc kubenswrapper[4835]: I0216 15:21:40.341693 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p84sp"] Feb 16 15:21:40 crc kubenswrapper[4835]: W0216 15:21:40.345107 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod465bac28_e6d6_4d64_ae01_934e6c250b4a.slice/crio-0b16cf35f5d3f3fb5b35cab37048db53b5ce82dc43032bb7f0871f10aab38079 WatchSource:0}: Error finding container 0b16cf35f5d3f3fb5b35cab37048db53b5ce82dc43032bb7f0871f10aab38079: Status 404 returned error can't find the container with id 0b16cf35f5d3f3fb5b35cab37048db53b5ce82dc43032bb7f0871f10aab38079 Feb 16 15:21:40 crc kubenswrapper[4835]: I0216 15:21:40.924802 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p84sp" event={"ID":"465bac28-e6d6-4d64-ae01-934e6c250b4a","Type":"ContainerStarted","Data":"0b16cf35f5d3f3fb5b35cab37048db53b5ce82dc43032bb7f0871f10aab38079"} Feb 16 15:21:42 crc kubenswrapper[4835]: I0216 15:21:42.939715 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p84sp" event={"ID":"465bac28-e6d6-4d64-ae01-934e6c250b4a","Type":"ContainerStarted","Data":"57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304"} Feb 16 15:21:43 crc kubenswrapper[4835]: I0216 15:21:43.183721 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-p84sp" podStartSLOduration=1.992682033 podStartE2EDuration="4.183697618s" podCreationTimestamp="2026-02-16 15:21:39 +0000 UTC" firstStartedPulling="2026-02-16 15:21:40.348403443 +0000 UTC 
m=+849.640396348" lastFinishedPulling="2026-02-16 15:21:42.539419038 +0000 UTC m=+851.831411933" observedRunningTime="2026-02-16 15:21:42.958855468 +0000 UTC m=+852.250848363" watchObservedRunningTime="2026-02-16 15:21:43.183697618 +0000 UTC m=+852.475690523" Feb 16 15:21:43 crc kubenswrapper[4835]: I0216 15:21:43.185564 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-p84sp"] Feb 16 15:21:43 crc kubenswrapper[4835]: I0216 15:21:43.795771 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-td85r"] Feb 16 15:21:43 crc kubenswrapper[4835]: I0216 15:21:43.797156 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:43 crc kubenswrapper[4835]: I0216 15:21:43.807439 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-td85r"] Feb 16 15:21:43 crc kubenswrapper[4835]: I0216 15:21:43.873928 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkvsx\" (UniqueName: \"kubernetes.io/projected/eb1bcd66-4bdb-42e3-be22-bf9752941ecc-kube-api-access-fkvsx\") pod \"openstack-operator-index-td85r\" (UID: \"eb1bcd66-4bdb-42e3-be22-bf9752941ecc\") " pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:43 crc kubenswrapper[4835]: I0216 15:21:43.976440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkvsx\" (UniqueName: \"kubernetes.io/projected/eb1bcd66-4bdb-42e3-be22-bf9752941ecc-kube-api-access-fkvsx\") pod \"openstack-operator-index-td85r\" (UID: \"eb1bcd66-4bdb-42e3-be22-bf9752941ecc\") " pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:44 crc kubenswrapper[4835]: I0216 15:21:43.999974 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkvsx\" 
(UniqueName: \"kubernetes.io/projected/eb1bcd66-4bdb-42e3-be22-bf9752941ecc-kube-api-access-fkvsx\") pod \"openstack-operator-index-td85r\" (UID: \"eb1bcd66-4bdb-42e3-be22-bf9752941ecc\") " pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:44 crc kubenswrapper[4835]: I0216 15:21:44.119485 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:44 crc kubenswrapper[4835]: I0216 15:21:44.598315 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-td85r"] Feb 16 15:21:44 crc kubenswrapper[4835]: I0216 15:21:44.956931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-td85r" event={"ID":"eb1bcd66-4bdb-42e3-be22-bf9752941ecc","Type":"ContainerStarted","Data":"52c8d4a1fd2aa9801ec44056bc262c75cd36475ef2bb4ed75c950b800e964268"} Feb 16 15:21:44 crc kubenswrapper[4835]: I0216 15:21:44.956994 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-td85r" event={"ID":"eb1bcd66-4bdb-42e3-be22-bf9752941ecc","Type":"ContainerStarted","Data":"c9ec1b3e5f7afd1c3415f02672a48edfb42f6b999b9d0cf2d50031e5dd464103"} Feb 16 15:21:44 crc kubenswrapper[4835]: I0216 15:21:44.956989 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-p84sp" podUID="465bac28-e6d6-4d64-ae01-934e6c250b4a" containerName="registry-server" containerID="cri-o://57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304" gracePeriod=2 Feb 16 15:21:44 crc kubenswrapper[4835]: I0216 15:21:44.984604 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-td85r" podStartSLOduration=1.9216604579999998 podStartE2EDuration="1.984524707s" podCreationTimestamp="2026-02-16 15:21:43 +0000 UTC" firstStartedPulling="2026-02-16 
15:21:44.611817186 +0000 UTC m=+853.903810081" lastFinishedPulling="2026-02-16 15:21:44.674681435 +0000 UTC m=+853.966674330" observedRunningTime="2026-02-16 15:21:44.976696388 +0000 UTC m=+854.268689303" watchObservedRunningTime="2026-02-16 15:21:44.984524707 +0000 UTC m=+854.276517642" Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.391941 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-p84sp" Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.496966 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxtj6\" (UniqueName: \"kubernetes.io/projected/465bac28-e6d6-4d64-ae01-934e6c250b4a-kube-api-access-gxtj6\") pod \"465bac28-e6d6-4d64-ae01-934e6c250b4a\" (UID: \"465bac28-e6d6-4d64-ae01-934e6c250b4a\") " Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.503323 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465bac28-e6d6-4d64-ae01-934e6c250b4a-kube-api-access-gxtj6" (OuterVolumeSpecName: "kube-api-access-gxtj6") pod "465bac28-e6d6-4d64-ae01-934e6c250b4a" (UID: "465bac28-e6d6-4d64-ae01-934e6c250b4a"). InnerVolumeSpecName "kube-api-access-gxtj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.598677 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxtj6\" (UniqueName: \"kubernetes.io/projected/465bac28-e6d6-4d64-ae01-934e6c250b4a-kube-api-access-gxtj6\") on node \"crc\" DevicePath \"\"" Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.694232 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b84c9" Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.966805 4835 generic.go:334] "Generic (PLEG): container finished" podID="465bac28-e6d6-4d64-ae01-934e6c250b4a" containerID="57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304" exitCode=0 Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.966864 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p84sp" event={"ID":"465bac28-e6d6-4d64-ae01-934e6c250b4a","Type":"ContainerDied","Data":"57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304"} Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.966909 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p84sp" event={"ID":"465bac28-e6d6-4d64-ae01-934e6c250b4a","Type":"ContainerDied","Data":"0b16cf35f5d3f3fb5b35cab37048db53b5ce82dc43032bb7f0871f10aab38079"} Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.966914 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-p84sp" Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.966928 4835 scope.go:117] "RemoveContainer" containerID="57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304" Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.994279 4835 scope.go:117] "RemoveContainer" containerID="57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304" Feb 16 15:21:45 crc kubenswrapper[4835]: E0216 15:21:45.994957 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304\": container with ID starting with 57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304 not found: ID does not exist" containerID="57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304" Feb 16 15:21:45 crc kubenswrapper[4835]: I0216 15:21:45.995053 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304"} err="failed to get container status \"57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304\": rpc error: code = NotFound desc = could not find container \"57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304\": container with ID starting with 57644e3db4921eb51ee97a013332847d1acbaca9f6127957c82902325b7c9304 not found: ID does not exist" Feb 16 15:21:46 crc kubenswrapper[4835]: I0216 15:21:46.021395 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-p84sp"] Feb 16 15:21:46 crc kubenswrapper[4835]: I0216 15:21:46.030259 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-p84sp"] Feb 16 15:21:47 crc kubenswrapper[4835]: I0216 15:21:47.394009 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="465bac28-e6d6-4d64-ae01-934e6c250b4a" path="/var/lib/kubelet/pods/465bac28-e6d6-4d64-ae01-934e6c250b4a/volumes" Feb 16 15:21:54 crc kubenswrapper[4835]: I0216 15:21:54.120318 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:54 crc kubenswrapper[4835]: I0216 15:21:54.121018 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:54 crc kubenswrapper[4835]: I0216 15:21:54.151861 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:55 crc kubenswrapper[4835]: I0216 15:21:55.070274 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-td85r" Feb 16 15:21:55 crc kubenswrapper[4835]: I0216 15:21:55.123282 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wh6l7" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.040424 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj"] Feb 16 15:21:56 crc kubenswrapper[4835]: E0216 15:21:56.041307 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465bac28-e6d6-4d64-ae01-934e6c250b4a" containerName="registry-server" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.041480 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="465bac28-e6d6-4d64-ae01-934e6c250b4a" containerName="registry-server" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.041976 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="465bac28-e6d6-4d64-ae01-934e6c250b4a" containerName="registry-server" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.044375 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.049053 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gw76w" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.053356 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj"] Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.141935 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.142068 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trvhs\" (UniqueName: \"kubernetes.io/projected/452942eb-5cd7-495f-88e4-9f5a272569e3-kube-api-access-trvhs\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.142294 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 
15:21:56.244180 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.244249 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trvhs\" (UniqueName: \"kubernetes.io/projected/452942eb-5cd7-495f-88e4-9f5a272569e3-kube-api-access-trvhs\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.244302 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.244756 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-util\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.244772 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-bundle\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.264204 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trvhs\" (UniqueName: \"kubernetes.io/projected/452942eb-5cd7-495f-88e4-9f5a272569e3-kube-api-access-trvhs\") pod \"60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:56 crc kubenswrapper[4835]: I0216 15:21:56.362645 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:21:57 crc kubenswrapper[4835]: I0216 15:21:57.141776 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj"] Feb 16 15:21:57 crc kubenswrapper[4835]: W0216 15:21:57.153695 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452942eb_5cd7_495f_88e4_9f5a272569e3.slice/crio-805ba477e64c1f8b16f77434f305df2eda294a89b9e1a74fed680cef5c99e8ea WatchSource:0}: Error finding container 805ba477e64c1f8b16f77434f305df2eda294a89b9e1a74fed680cef5c99e8ea: Status 404 returned error can't find the container with id 805ba477e64c1f8b16f77434f305df2eda294a89b9e1a74fed680cef5c99e8ea Feb 16 15:21:58 crc kubenswrapper[4835]: I0216 15:21:58.057716 4835 generic.go:334] "Generic (PLEG): container finished" podID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerID="fd33ced58f1950f3d784002a49f3504c9c21205b74260ea23bf2f633191a9443" exitCode=0 Feb 16 
15:21:58 crc kubenswrapper[4835]: I0216 15:21:58.057958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" event={"ID":"452942eb-5cd7-495f-88e4-9f5a272569e3","Type":"ContainerDied","Data":"fd33ced58f1950f3d784002a49f3504c9c21205b74260ea23bf2f633191a9443"} Feb 16 15:21:58 crc kubenswrapper[4835]: I0216 15:21:58.058037 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" event={"ID":"452942eb-5cd7-495f-88e4-9f5a272569e3","Type":"ContainerStarted","Data":"805ba477e64c1f8b16f77434f305df2eda294a89b9e1a74fed680cef5c99e8ea"} Feb 16 15:21:59 crc kubenswrapper[4835]: I0216 15:21:59.072035 4835 generic.go:334] "Generic (PLEG): container finished" podID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerID="1dd5838cc0c9a4b900f863c873ddf04648e6a9361f7ccdb531fdd432c1b3f39f" exitCode=0 Feb 16 15:21:59 crc kubenswrapper[4835]: I0216 15:21:59.072167 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" event={"ID":"452942eb-5cd7-495f-88e4-9f5a272569e3","Type":"ContainerDied","Data":"1dd5838cc0c9a4b900f863c873ddf04648e6a9361f7ccdb531fdd432c1b3f39f"} Feb 16 15:22:00 crc kubenswrapper[4835]: I0216 15:22:00.081493 4835 generic.go:334] "Generic (PLEG): container finished" podID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerID="9aabaa5763cf4ffe114e29c8fa1474d9c8ac6953bc13af98cea1f2723370b50d" exitCode=0 Feb 16 15:22:00 crc kubenswrapper[4835]: I0216 15:22:00.081550 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" event={"ID":"452942eb-5cd7-495f-88e4-9f5a272569e3","Type":"ContainerDied","Data":"9aabaa5763cf4ffe114e29c8fa1474d9c8ac6953bc13af98cea1f2723370b50d"} Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.334371 
4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.522459 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-util\") pod \"452942eb-5cd7-495f-88e4-9f5a272569e3\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.522544 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-bundle\") pod \"452942eb-5cd7-495f-88e4-9f5a272569e3\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.522582 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trvhs\" (UniqueName: \"kubernetes.io/projected/452942eb-5cd7-495f-88e4-9f5a272569e3-kube-api-access-trvhs\") pod \"452942eb-5cd7-495f-88e4-9f5a272569e3\" (UID: \"452942eb-5cd7-495f-88e4-9f5a272569e3\") " Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.523319 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-bundle" (OuterVolumeSpecName: "bundle") pod "452942eb-5cd7-495f-88e4-9f5a272569e3" (UID: "452942eb-5cd7-495f-88e4-9f5a272569e3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.533703 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452942eb-5cd7-495f-88e4-9f5a272569e3-kube-api-access-trvhs" (OuterVolumeSpecName: "kube-api-access-trvhs") pod "452942eb-5cd7-495f-88e4-9f5a272569e3" (UID: "452942eb-5cd7-495f-88e4-9f5a272569e3"). 
InnerVolumeSpecName "kube-api-access-trvhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.538754 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-util" (OuterVolumeSpecName: "util") pod "452942eb-5cd7-495f-88e4-9f5a272569e3" (UID: "452942eb-5cd7-495f-88e4-9f5a272569e3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.623767 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.623808 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trvhs\" (UniqueName: \"kubernetes.io/projected/452942eb-5cd7-495f-88e4-9f5a272569e3-kube-api-access-trvhs\") on node \"crc\" DevicePath \"\"" Feb 16 15:22:01 crc kubenswrapper[4835]: I0216 15:22:01.623823 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/452942eb-5cd7-495f-88e4-9f5a272569e3-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:22:02 crc kubenswrapper[4835]: I0216 15:22:02.093642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" event={"ID":"452942eb-5cd7-495f-88e4-9f5a272569e3","Type":"ContainerDied","Data":"805ba477e64c1f8b16f77434f305df2eda294a89b9e1a74fed680cef5c99e8ea"} Feb 16 15:22:02 crc kubenswrapper[4835]: I0216 15:22:02.094030 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="805ba477e64c1f8b16f77434f305df2eda294a89b9e1a74fed680cef5c99e8ea" Feb 16 15:22:02 crc kubenswrapper[4835]: I0216 15:22:02.093691 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.610846 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx"] Feb 16 15:22:10 crc kubenswrapper[4835]: E0216 15:22:10.611925 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerName="pull" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.611943 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerName="pull" Feb 16 15:22:10 crc kubenswrapper[4835]: E0216 15:22:10.611959 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerName="extract" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.611967 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerName="extract" Feb 16 15:22:10 crc kubenswrapper[4835]: E0216 15:22:10.611977 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerName="util" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.611986 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerName="util" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.612127 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="452942eb-5cd7-495f-88e4-9f5a272569e3" containerName="extract" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.612654 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.615789 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-tgjcw" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.633822 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx"] Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.678576 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99cc\" (UniqueName: \"kubernetes.io/projected/ab3e43ea-05f5-400b-ba5e-95d106b9697a-kube-api-access-d99cc\") pod \"openstack-operator-controller-init-5857b4c744-wcnkx\" (UID: \"ab3e43ea-05f5-400b-ba5e-95d106b9697a\") " pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.779769 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99cc\" (UniqueName: \"kubernetes.io/projected/ab3e43ea-05f5-400b-ba5e-95d106b9697a-kube-api-access-d99cc\") pod \"openstack-operator-controller-init-5857b4c744-wcnkx\" (UID: \"ab3e43ea-05f5-400b-ba5e-95d106b9697a\") " pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.801669 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99cc\" (UniqueName: \"kubernetes.io/projected/ab3e43ea-05f5-400b-ba5e-95d106b9697a-kube-api-access-d99cc\") pod \"openstack-operator-controller-init-5857b4c744-wcnkx\" (UID: \"ab3e43ea-05f5-400b-ba5e-95d106b9697a\") " pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" Feb 16 15:22:10 crc kubenswrapper[4835]: I0216 15:22:10.931803 4835 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" Feb 16 15:22:11 crc kubenswrapper[4835]: I0216 15:22:11.393900 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx"] Feb 16 15:22:11 crc kubenswrapper[4835]: I0216 15:22:11.397024 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:22:12 crc kubenswrapper[4835]: I0216 15:22:12.161580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" event={"ID":"ab3e43ea-05f5-400b-ba5e-95d106b9697a","Type":"ContainerStarted","Data":"2fffdf878fdbbca47259f8007957215d4bd31c4122ad3ba70fa73b726ea3aa14"} Feb 16 15:22:15 crc kubenswrapper[4835]: I0216 15:22:15.182202 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" event={"ID":"ab3e43ea-05f5-400b-ba5e-95d106b9697a","Type":"ContainerStarted","Data":"7c4572d501ecd49458eb39521f9c682726d9093bcadedc89217ccf92d481541d"} Feb 16 15:22:15 crc kubenswrapper[4835]: I0216 15:22:15.184104 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" Feb 16 15:22:15 crc kubenswrapper[4835]: I0216 15:22:15.217085 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" podStartSLOduration=1.672561266 podStartE2EDuration="5.217069973s" podCreationTimestamp="2026-02-16 15:22:10 +0000 UTC" firstStartedPulling="2026-02-16 15:22:11.396824512 +0000 UTC m=+880.688817407" lastFinishedPulling="2026-02-16 15:22:14.941333209 +0000 UTC m=+884.233326114" observedRunningTime="2026-02-16 15:22:15.21107356 +0000 UTC m=+884.503066455" watchObservedRunningTime="2026-02-16 
15:22:15.217069973 +0000 UTC m=+884.509062868" Feb 16 15:22:18 crc kubenswrapper[4835]: I0216 15:22:18.586903 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:22:18 crc kubenswrapper[4835]: I0216 15:22:18.587395 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:22:20 crc kubenswrapper[4835]: I0216 15:22:20.934644 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5857b4c744-wcnkx" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.203381 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.204609 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.208160 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-2mc7c" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.213154 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.225362 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.226417 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.228067 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mmjd7" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.246432 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.247469 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.251154 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-58b2x" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.254749 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.273054 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.277762 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-dztnx"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.278787 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.280763 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bff2f\" (UniqueName: \"kubernetes.io/projected/c9835f90-5158-4aa7-9cc5-4d3a1e1feb63-kube-api-access-bff2f\") pod \"designate-operator-controller-manager-6d8bf5c495-r658s\" (UID: \"c9835f90-5158-4aa7-9cc5-4d3a1e1feb63\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.280848 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnrff\" (UniqueName: \"kubernetes.io/projected/a7d34cbf-f94d-4170-9937-b0d05d9785e2-kube-api-access-cnrff\") pod \"cinder-operator-controller-manager-5d946d989d-kwlf2\" (UID: \"a7d34cbf-f94d-4170-9937-b0d05d9785e2\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.280872 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f7ts\" (UniqueName: \"kubernetes.io/projected/fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765-kube-api-access-8f7ts\") pod \"barbican-operator-controller-manager-868647ff47-cljs5\" (UID: \"fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.285259 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.286115 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.321956 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-q2hmt" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.322218 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-82f5w" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.356313 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-dztnx"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.375612 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.414419 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnrff\" (UniqueName: \"kubernetes.io/projected/a7d34cbf-f94d-4170-9937-b0d05d9785e2-kube-api-access-cnrff\") pod \"cinder-operator-controller-manager-5d946d989d-kwlf2\" (UID: \"a7d34cbf-f94d-4170-9937-b0d05d9785e2\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.414478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f7ts\" (UniqueName: \"kubernetes.io/projected/fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765-kube-api-access-8f7ts\") pod \"barbican-operator-controller-manager-868647ff47-cljs5\" (UID: \"fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.414599 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sscgj\" (UniqueName: \"kubernetes.io/projected/cd439265-4f6d-48db-bf6e-3353288aff58-kube-api-access-sscgj\") pod \"heat-operator-controller-manager-69f49c598c-zwf6h\" (UID: \"cd439265-4f6d-48db-bf6e-3353288aff58\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.414627 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdk7m\" (UniqueName: \"kubernetes.io/projected/79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b-kube-api-access-rdk7m\") pod \"glance-operator-controller-manager-77987464f4-dztnx\" (UID: \"79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.414693 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bff2f\" (UniqueName: \"kubernetes.io/projected/c9835f90-5158-4aa7-9cc5-4d3a1e1feb63-kube-api-access-bff2f\") pod \"designate-operator-controller-manager-6d8bf5c495-r658s\" (UID: \"c9835f90-5158-4aa7-9cc5-4d3a1e1feb63\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.445602 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.446684 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.447411 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f7ts\" (UniqueName: \"kubernetes.io/projected/fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765-kube-api-access-8f7ts\") pod \"barbican-operator-controller-manager-868647ff47-cljs5\" (UID: \"fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.450203 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bv8r6" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.459891 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bff2f\" (UniqueName: \"kubernetes.io/projected/c9835f90-5158-4aa7-9cc5-4d3a1e1feb63-kube-api-access-bff2f\") pod \"designate-operator-controller-manager-6d8bf5c495-r658s\" (UID: \"c9835f90-5158-4aa7-9cc5-4d3a1e1feb63\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.466624 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.473036 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.473916 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.480842 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.481113 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-f5x5s" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.482883 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.489224 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnrff\" (UniqueName: \"kubernetes.io/projected/a7d34cbf-f94d-4170-9937-b0d05d9785e2-kube-api-access-cnrff\") pod \"cinder-operator-controller-manager-5d946d989d-kwlf2\" (UID: \"a7d34cbf-f94d-4170-9937-b0d05d9785e2\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.493176 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.494136 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.502953 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fvmrl" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.506664 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.512684 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.514105 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.516168 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jjx\" (UniqueName: \"kubernetes.io/projected/68f68635-db35-44a5-8256-cea92b856a61-kube-api-access-n9jjx\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.516217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr76g\" (UniqueName: \"kubernetes.io/projected/e9548890-1ce7-42a6-a870-ae0727b81a68-kube-api-access-cr76g\") pod \"horizon-operator-controller-manager-5b9b8895d5-p7bhp\" (UID: \"e9548890-1ce7-42a6-a870-ae0727b81a68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.516248 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.516291 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sscgj\" (UniqueName: \"kubernetes.io/projected/cd439265-4f6d-48db-bf6e-3353288aff58-kube-api-access-sscgj\") pod \"heat-operator-controller-manager-69f49c598c-zwf6h\" (UID: \"cd439265-4f6d-48db-bf6e-3353288aff58\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.516310 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdk7m\" (UniqueName: \"kubernetes.io/projected/79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b-kube-api-access-rdk7m\") pod \"glance-operator-controller-manager-77987464f4-dztnx\" (UID: \"79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.517989 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hm45w" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.535329 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.538266 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.560031 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.562766 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.563282 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sscgj\" (UniqueName: \"kubernetes.io/projected/cd439265-4f6d-48db-bf6e-3353288aff58-kube-api-access-sscgj\") pod \"heat-operator-controller-manager-69f49c598c-zwf6h\" (UID: \"cd439265-4f6d-48db-bf6e-3353288aff58\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.563854 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.567056 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.570989 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-ck4bg" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.574738 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdk7m\" (UniqueName: \"kubernetes.io/projected/79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b-kube-api-access-rdk7m\") pod \"glance-operator-controller-manager-77987464f4-dztnx\" (UID: \"79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.577592 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.585520 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.586895 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.590908 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-shl4z" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.601587 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.614714 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.615855 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.617274 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws7qw\" (UniqueName: \"kubernetes.io/projected/4617e695-08f5-496c-92cc-496f6ce85441-kube-api-access-ws7qw\") pod \"manila-operator-controller-manager-54f6768c69-4g6lm\" (UID: \"4617e695-08f5-496c-92cc-496f6ce85441\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.617345 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9jjx\" (UniqueName: \"kubernetes.io/projected/68f68635-db35-44a5-8256-cea92b856a61-kube-api-access-n9jjx\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.617382 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-6nplj\" (UniqueName: \"kubernetes.io/projected/18703fe7-0c06-4977-9357-b9eff4ecdeba-kube-api-access-6nplj\") pod \"keystone-operator-controller-manager-b4d948c87-xzw5m\" (UID: \"18703fe7-0c06-4977-9357-b9eff4ecdeba\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.617411 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr76g\" (UniqueName: \"kubernetes.io/projected/e9548890-1ce7-42a6-a870-ae0727b81a68-kube-api-access-cr76g\") pod \"horizon-operator-controller-manager-5b9b8895d5-p7bhp\" (UID: \"e9548890-1ce7-42a6-a870-ae0727b81a68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.617437 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clbnn\" (UniqueName: \"kubernetes.io/projected/4155bc83-03aa-4e2d-a024-0967569539b4-kube-api-access-clbnn\") pod \"ironic-operator-controller-manager-554564d7fc-724ns\" (UID: \"4155bc83-03aa-4e2d-a024-0967569539b4\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.617463 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:40 crc kubenswrapper[4835]: E0216 15:22:40.617691 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:40 crc kubenswrapper[4835]: E0216 15:22:40.617864 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert podName:68f68635-db35-44a5-8256-cea92b856a61 nodeName:}" failed. No retries permitted until 2026-02-16 15:22:41.117845595 +0000 UTC m=+910.409838490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert") pod "infra-operator-controller-manager-79d975b745-w9l9w" (UID: "68f68635-db35-44a5-8256-cea92b856a61") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.622741 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.633860 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-czwml" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.650992 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.665522 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jjx\" (UniqueName: \"kubernetes.io/projected/68f68635-db35-44a5-8256-cea92b856a61-kube-api-access-n9jjx\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.667193 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.668772 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.672164 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.698057 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr76g\" (UniqueName: \"kubernetes.io/projected/e9548890-1ce7-42a6-a870-ae0727b81a68-kube-api-access-cr76g\") pod \"horizon-operator-controller-manager-5b9b8895d5-p7bhp\" (UID: \"e9548890-1ce7-42a6-a870-ae0727b81a68\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.701919 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.708647 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-x59zx" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.712062 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.718931 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxff2\" (UniqueName: \"kubernetes.io/projected/af7a1a80-d93a-4587-adaa-dda2c307e344-kube-api-access-xxff2\") pod \"mariadb-operator-controller-manager-6994f66f48-ztgc5\" (UID: \"af7a1a80-d93a-4587-adaa-dda2c307e344\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.719000 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws7qw\" (UniqueName: \"kubernetes.io/projected/4617e695-08f5-496c-92cc-496f6ce85441-kube-api-access-ws7qw\") pod \"manila-operator-controller-manager-54f6768c69-4g6lm\" (UID: \"4617e695-08f5-496c-92cc-496f6ce85441\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.719023 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kt5b\" (UniqueName: \"kubernetes.io/projected/f12b307e-157c-4aa1-91a5-2d55f2fa7def-kube-api-access-2kt5b\") pod \"nova-operator-controller-manager-567668f5cf-v56sr\" (UID: \"f12b307e-157c-4aa1-91a5-2d55f2fa7def\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.719056 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nplj\" (UniqueName: \"kubernetes.io/projected/18703fe7-0c06-4977-9357-b9eff4ecdeba-kube-api-access-6nplj\") pod \"keystone-operator-controller-manager-b4d948c87-xzw5m\" (UID: \"18703fe7-0c06-4977-9357-b9eff4ecdeba\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" Feb 16 15:22:40 crc 
kubenswrapper[4835]: I0216 15:22:40.719077 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clbnn\" (UniqueName: \"kubernetes.io/projected/4155bc83-03aa-4e2d-a024-0967569539b4-kube-api-access-clbnn\") pod \"ironic-operator-controller-manager-554564d7fc-724ns\" (UID: \"4155bc83-03aa-4e2d-a024-0967569539b4\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.719113 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4nt\" (UniqueName: \"kubernetes.io/projected/b3a956a8-c3c7-4ca9-b8d5-902e89252e7c-kube-api-access-nn4nt\") pod \"neutron-operator-controller-manager-64ddbf8bb-sf54k\" (UID: \"b3a956a8-c3c7-4ca9-b8d5-902e89252e7c\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.728472 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.749935 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rbxmz" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.758581 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.797139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nplj\" (UniqueName: \"kubernetes.io/projected/18703fe7-0c06-4977-9357-b9eff4ecdeba-kube-api-access-6nplj\") pod \"keystone-operator-controller-manager-b4d948c87-xzw5m\" (UID: \"18703fe7-0c06-4977-9357-b9eff4ecdeba\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" Feb 16 15:22:40 crc 
kubenswrapper[4835]: I0216 15:22:40.806612 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws7qw\" (UniqueName: \"kubernetes.io/projected/4617e695-08f5-496c-92cc-496f6ce85441-kube-api-access-ws7qw\") pod \"manila-operator-controller-manager-54f6768c69-4g6lm\" (UID: \"4617e695-08f5-496c-92cc-496f6ce85441\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.812494 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clbnn\" (UniqueName: \"kubernetes.io/projected/4155bc83-03aa-4e2d-a024-0967569539b4-kube-api-access-clbnn\") pod \"ironic-operator-controller-manager-554564d7fc-724ns\" (UID: \"4155bc83-03aa-4e2d-a024-0967569539b4\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.822188 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kt5b\" (UniqueName: \"kubernetes.io/projected/f12b307e-157c-4aa1-91a5-2d55f2fa7def-kube-api-access-2kt5b\") pod \"nova-operator-controller-manager-567668f5cf-v56sr\" (UID: \"f12b307e-157c-4aa1-91a5-2d55f2fa7def\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.822273 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn4nt\" (UniqueName: \"kubernetes.io/projected/b3a956a8-c3c7-4ca9-b8d5-902e89252e7c-kube-api-access-nn4nt\") pod \"neutron-operator-controller-manager-64ddbf8bb-sf54k\" (UID: \"b3a956a8-c3c7-4ca9-b8d5-902e89252e7c\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.822321 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhk2n\" (UniqueName: 
\"kubernetes.io/projected/602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e-kube-api-access-dhk2n\") pod \"octavia-operator-controller-manager-69f8888797-4bhnk\" (UID: \"602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.822342 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxff2\" (UniqueName: \"kubernetes.io/projected/af7a1a80-d93a-4587-adaa-dda2c307e344-kube-api-access-xxff2\") pod \"mariadb-operator-controller-manager-6994f66f48-ztgc5\" (UID: \"af7a1a80-d93a-4587-adaa-dda2c307e344\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.827748 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.828643 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.836324 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.839156 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hk4kl" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.846909 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.847785 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.850499 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hhqv5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.850660 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.880376 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kt5b\" (UniqueName: \"kubernetes.io/projected/f12b307e-157c-4aa1-91a5-2d55f2fa7def-kube-api-access-2kt5b\") pod \"nova-operator-controller-manager-567668f5cf-v56sr\" (UID: \"f12b307e-157c-4aa1-91a5-2d55f2fa7def\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.896297 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.901598 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxff2\" (UniqueName: \"kubernetes.io/projected/af7a1a80-d93a-4587-adaa-dda2c307e344-kube-api-access-xxff2\") pod \"mariadb-operator-controller-manager-6994f66f48-ztgc5\" (UID: \"af7a1a80-d93a-4587-adaa-dda2c307e344\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.906045 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn4nt\" (UniqueName: \"kubernetes.io/projected/b3a956a8-c3c7-4ca9-b8d5-902e89252e7c-kube-api-access-nn4nt\") pod \"neutron-operator-controller-manager-64ddbf8bb-sf54k\" (UID: \"b3a956a8-c3c7-4ca9-b8d5-902e89252e7c\") 
" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.928172 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc55c\" (UniqueName: \"kubernetes.io/projected/2669aead-589f-4383-af0f-abea4a49f6fd-kube-api-access-rc55c\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.928225 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhk2n\" (UniqueName: \"kubernetes.io/projected/602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e-kube-api-access-dhk2n\") pod \"octavia-operator-controller-manager-69f8888797-4bhnk\" (UID: \"602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.928286 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.928310 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzph\" (UniqueName: \"kubernetes.io/projected/430b904f-ff3b-4b49-a212-7affb09621ef-kube-api-access-mrzph\") pod \"ovn-operator-controller-manager-d44cf6b75-52427\" (UID: \"430b904f-ff3b-4b49-a212-7affb09621ef\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" Feb 16 15:22:40 crc 
kubenswrapper[4835]: I0216 15:22:40.937222 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.938033 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.949876 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-njzx8" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.961430 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-n52fd"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.973911 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.962847 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.974140 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.987396 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-n52fd"] Feb 16 15:22:40 crc kubenswrapper[4835]: I0216 15:22:40.988497 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6h5wv" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.011478 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.012269 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.028887 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.032321 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc55c\" (UniqueName: \"kubernetes.io/projected/2669aead-589f-4383-af0f-abea4a49f6fd-kube-api-access-rc55c\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.032468 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nz56\" (UniqueName: \"kubernetes.io/projected/e8ca5fe8-55ed-40a0-987e-59face1c1a19-kube-api-access-2nz56\") pod \"swift-operator-controller-manager-68f46476f-n52fd\" (UID: \"e8ca5fe8-55ed-40a0-987e-59face1c1a19\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.032562 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rct\" (UniqueName: \"kubernetes.io/projected/40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256-kube-api-access-v4rct\") pod \"placement-operator-controller-manager-8497b45c89-g5cw4\" (UID: \"40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" Feb 16 15:22:41 crc 
kubenswrapper[4835]: I0216 15:22:41.032756 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.032798 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzph\" (UniqueName: \"kubernetes.io/projected/430b904f-ff3b-4b49-a212-7affb09621ef-kube-api-access-mrzph\") pod \"ovn-operator-controller-manager-d44cf6b75-52427\" (UID: \"430b904f-ff3b-4b49-a212-7affb09621ef\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.033541 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.033592 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert podName:2669aead-589f-4383-af0f-abea4a49f6fd nodeName:}" failed. No retries permitted until 2026-02-16 15:22:41.53357088 +0000 UTC m=+910.825563775 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" (UID: "2669aead-589f-4383-af0f-abea4a49f6fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.038251 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.045177 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhk2n\" (UniqueName: \"kubernetes.io/projected/602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e-kube-api-access-dhk2n\") pod \"octavia-operator-controller-manager-69f8888797-4bhnk\" (UID: \"602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.072636 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.074597 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.084179 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.084337 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qfbw2" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.091476 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc55c\" (UniqueName: \"kubernetes.io/projected/2669aead-589f-4383-af0f-abea4a49f6fd-kube-api-access-rc55c\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.092711 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.112163 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzph\" (UniqueName: \"kubernetes.io/projected/430b904f-ff3b-4b49-a212-7affb09621ef-kube-api-access-mrzph\") pod \"ovn-operator-controller-manager-d44cf6b75-52427\" (UID: \"430b904f-ff3b-4b49-a212-7affb09621ef\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.118664 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-mx5lg"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.119472 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.119856 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.134406 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-th6dd" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.135217 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.135290 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nz56\" (UniqueName: \"kubernetes.io/projected/e8ca5fe8-55ed-40a0-987e-59face1c1a19-kube-api-access-2nz56\") pod \"swift-operator-controller-manager-68f46476f-n52fd\" (UID: \"e8ca5fe8-55ed-40a0-987e-59face1c1a19\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.135321 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zblfl\" (UniqueName: \"kubernetes.io/projected/91557fa8-9b53-43ea-b9bb-13117ee5d714-kube-api-access-zblfl\") pod \"telemetry-operator-controller-manager-5884f785c-68ssq\" (UID: \"91557fa8-9b53-43ea-b9bb-13117ee5d714\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.135357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rct\" (UniqueName: \"kubernetes.io/projected/40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256-kube-api-access-v4rct\") pod \"placement-operator-controller-manager-8497b45c89-g5cw4\" 
(UID: \"40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.135631 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.135674 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert podName:68f68635-db35-44a5-8256-cea92b856a61 nodeName:}" failed. No retries permitted until 2026-02-16 15:22:42.135660697 +0000 UTC m=+911.427653592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert") pod "infra-operator-controller-manager-79d975b745-w9l9w" (UID: "68f68635-db35-44a5-8256-cea92b856a61") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.148588 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.149544 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.152225 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-nhg59" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.152635 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.159223 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rct\" (UniqueName: \"kubernetes.io/projected/40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256-kube-api-access-v4rct\") pod \"placement-operator-controller-manager-8497b45c89-g5cw4\" (UID: \"40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.185126 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nz56\" (UniqueName: \"kubernetes.io/projected/e8ca5fe8-55ed-40a0-987e-59face1c1a19-kube-api-access-2nz56\") pod \"swift-operator-controller-manager-68f46476f-n52fd\" (UID: \"e8ca5fe8-55ed-40a0-987e-59face1c1a19\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.186456 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.199188 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.237660 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zblfl\" (UniqueName: \"kubernetes.io/projected/91557fa8-9b53-43ea-b9bb-13117ee5d714-kube-api-access-zblfl\") pod \"telemetry-operator-controller-manager-5884f785c-68ssq\" (UID: \"91557fa8-9b53-43ea-b9bb-13117ee5d714\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.237714 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh6cx\" (UniqueName: \"kubernetes.io/projected/76535411-ec0f-4b41-9d6e-084d72e4deec-kube-api-access-xh6cx\") pod \"watcher-operator-controller-manager-5db88f68c-2k2dp\" (UID: \"76535411-ec0f-4b41-9d6e-084d72e4deec\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.237785 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfbwb\" (UniqueName: \"kubernetes.io/projected/24f969bf-0ff8-4e85-a388-bde2f6ad68bb-kube-api-access-jfbwb\") pod \"test-operator-controller-manager-7866795846-mx5lg\" (UID: \"24f969bf-0ff8-4e85-a388-bde2f6ad68bb\") " pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.259131 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zblfl\" (UniqueName: \"kubernetes.io/projected/91557fa8-9b53-43ea-b9bb-13117ee5d714-kube-api-access-zblfl\") pod \"telemetry-operator-controller-manager-5884f785c-68ssq\" (UID: \"91557fa8-9b53-43ea-b9bb-13117ee5d714\") " pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" Feb 16 15:22:41 crc 
kubenswrapper[4835]: I0216 15:22:41.259179 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.272438 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-mx5lg"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.305037 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.306706 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.308932 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.309898 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.310442 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-t2shb" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.333183 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.338754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " 
pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.338813 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfbwb\" (UniqueName: \"kubernetes.io/projected/24f969bf-0ff8-4e85-a388-bde2f6ad68bb-kube-api-access-jfbwb\") pod \"test-operator-controller-manager-7866795846-mx5lg\" (UID: \"24f969bf-0ff8-4e85-a388-bde2f6ad68bb\") " pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.338841 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9thx\" (UniqueName: \"kubernetes.io/projected/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-kube-api-access-s9thx\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.339138 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh6cx\" (UniqueName: \"kubernetes.io/projected/76535411-ec0f-4b41-9d6e-084d72e4deec-kube-api-access-xh6cx\") pod \"watcher-operator-controller-manager-5db88f68c-2k2dp\" (UID: \"76535411-ec0f-4b41-9d6e-084d72e4deec\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.339258 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 
15:22:41.360846 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.362719 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh6cx\" (UniqueName: \"kubernetes.io/projected/76535411-ec0f-4b41-9d6e-084d72e4deec-kube-api-access-xh6cx\") pod \"watcher-operator-controller-manager-5db88f68c-2k2dp\" (UID: \"76535411-ec0f-4b41-9d6e-084d72e4deec\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.363546 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.363523 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfbwb\" (UniqueName: \"kubernetes.io/projected/24f969bf-0ff8-4e85-a388-bde2f6ad68bb-kube-api-access-jfbwb\") pod \"test-operator-controller-manager-7866795846-mx5lg\" (UID: \"24f969bf-0ff8-4e85-a388-bde2f6ad68bb\") " pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.370638 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-qfxjs" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.435916 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.440806 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: 
\"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.440847 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.440872 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9thx\" (UniqueName: \"kubernetes.io/projected/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-kube-api-access-s9thx\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.440912 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjq8j\" (UniqueName: \"kubernetes.io/projected/e8e216ae-88d9-42e2-a387-b264904e7e20-kube-api-access-fjq8j\") pod \"rabbitmq-cluster-operator-manager-668c99d594-29wr6\" (UID: \"e8e216ae-88d9-42e2-a387-b264904e7e20\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.441804 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.441847 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. 
No retries permitted until 2026-02-16 15:22:41.941834965 +0000 UTC m=+911.233827860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "metrics-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.441901 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.441950 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:41.941934458 +0000 UTC m=+911.233927353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.445207 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.476449 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9thx\" (UniqueName: \"kubernetes.io/projected/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-kube-api-access-s9thx\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.533599 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.545867 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.545957 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjq8j\" (UniqueName: \"kubernetes.io/projected/e8e216ae-88d9-42e2-a387-b264904e7e20-kube-api-access-fjq8j\") pod \"rabbitmq-cluster-operator-manager-668c99d594-29wr6\" (UID: \"e8e216ae-88d9-42e2-a387-b264904e7e20\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.546185 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.546263 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert podName:2669aead-589f-4383-af0f-abea4a49f6fd nodeName:}" failed. No retries permitted until 2026-02-16 15:22:42.546241451 +0000 UTC m=+911.838234336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" (UID: "2669aead-589f-4383-af0f-abea4a49f6fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.562508 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjq8j\" (UniqueName: \"kubernetes.io/projected/e8e216ae-88d9-42e2-a387-b264904e7e20-kube-api-access-fjq8j\") pod \"rabbitmq-cluster-operator-manager-668c99d594-29wr6\" (UID: \"e8e216ae-88d9-42e2-a387-b264904e7e20\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.610935 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.620892 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.737541 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.749148 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.935738 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-dztnx"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.943486 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.949298 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s"] Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.961446 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.961484 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.961637 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:22:41 crc 
kubenswrapper[4835]: E0216 15:22:41.961689 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:42.961672509 +0000 UTC m=+912.253665404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "webhook-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.962040 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: E0216 15:22:41.962067 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:42.962058679 +0000 UTC m=+912.254051574 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "metrics-server-cert" not found Feb 16 15:22:41 crc kubenswrapper[4835]: I0216 15:22:41.979073 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.121651 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.136450 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.148883 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.153993 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.164967 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.165120 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.165228 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert podName:68f68635-db35-44a5-8256-cea92b856a61 nodeName:}" failed. No retries permitted until 2026-02-16 15:22:44.165211167 +0000 UTC m=+913.457204062 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert") pod "infra-operator-controller-manager-79d975b745-w9l9w" (UID: "68f68635-db35-44a5-8256-cea92b856a61") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.302779 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427"] Feb 16 15:22:42 crc kubenswrapper[4835]: W0216 15:22:42.320502 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod430b904f_ff3b_4b49_a212_7affb09621ef.slice/crio-b33d120340b4fdf766945f49b1835f2c35f7a3027224f3d88aab2174bb7ccddf WatchSource:0}: Error finding container b33d120340b4fdf766945f49b1835f2c35f7a3027224f3d88aab2174bb7ccddf: Status 404 returned error can't find the container with id b33d120340b4fdf766945f49b1835f2c35f7a3027224f3d88aab2174bb7ccddf Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.323456 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.337352 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.355221 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.369745 4835 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.378877 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-n52fd"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.391935 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp"] Feb 16 15:22:42 crc kubenswrapper[4835]: W0216 15:22:42.392697 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8e216ae_88d9_42e2_a387_b264904e7e20.slice/crio-cc0b1fae46850b98319d98fbd23844cf6063d88598756b6ae3ef001b97e59442 WatchSource:0}: Error finding container cc0b1fae46850b98319d98fbd23844cf6063d88598756b6ae3ef001b97e59442: Status 404 returned error can't find the container with id cc0b1fae46850b98319d98fbd23844cf6063d88598756b6ae3ef001b97e59442 Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.392927 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cr76g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-p7bhp_openstack-operators(e9548890-1ce7-42a6-a870-ae0727b81a68): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.393059 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zblfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5884f785c-68ssq_openstack-operators(91557fa8-9b53-43ea-b9bb-13117ee5d714): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.394407 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" podUID="91557fa8-9b53-43ea-b9bb-13117ee5d714" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.394584 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" podUID="e9548890-1ce7-42a6-a870-ae0727b81a68" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.396383 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjq8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-29wr6_openstack-operators(e8e216ae-88d9-42e2-a387-b264904e7e20): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:22:42 crc kubenswrapper[4835]: W0216 15:22:42.396991 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f969bf_0ff8_4e85_a388_bde2f6ad68bb.slice/crio-0715b00e7ba8d157cddad83feb53ff6e0c2bcf34546dc439cbc8ea5f280fc2e3 WatchSource:0}: Error finding container 0715b00e7ba8d157cddad83feb53ff6e0c2bcf34546dc439cbc8ea5f280fc2e3: Status 404 returned error can't find the container with id 0715b00e7ba8d157cddad83feb53ff6e0c2bcf34546dc439cbc8ea5f280fc2e3 Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.398069 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" podUID="e8e216ae-88d9-42e2-a387-b264904e7e20" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.399337 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6"] Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.405205 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhk2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-4bhnk_openstack-operators(602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.405876 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xh6cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-2k2dp_openstack-operators(76535411-ec0f-4b41-9d6e-084d72e4deec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.406396 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" podUID="602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.407689 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" podUID="76535411-ec0f-4b41-9d6e-084d72e4deec" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.410145 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nz56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-n52fd_openstack-operators(e8ca5fe8-55ed-40a0-987e-59face1c1a19): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.411741 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" podUID="e8ca5fe8-55ed-40a0-987e-59face1c1a19" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.414288 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-mx5lg"] Feb 16 15:22:42 crc kubenswrapper[4835]: W0216 15:22:42.417519 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40e93ce0_f7ad_4fc7_ab2d_5fcf556bd256.slice/crio-e2b9a3dddbc5ad0426b79d11f12dbb825b183e8f8f6f16af537f204defadaff7 WatchSource:0}: Error finding container e2b9a3dddbc5ad0426b79d11f12dbb825b183e8f8f6f16af537f204defadaff7: Status 404 returned error can't find the container with id 
e2b9a3dddbc5ad0426b79d11f12dbb825b183e8f8f6f16af537f204defadaff7 Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.420964 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4rct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-g5cw4_openstack-operators(40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.421562 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp"] Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.422154 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" podUID="40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.430691 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4"] Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.434089 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" 
event={"ID":"79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b","Type":"ContainerStarted","Data":"320c29587abfd0a2be1f70732b63b195611b7ae9ec5f0eae0e27fe9a73d03419"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.435034 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" event={"ID":"fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765","Type":"ContainerStarted","Data":"46b9dfc4598e7c3c3520505d43cbb0fedd517e4be3d60f0c456e15ad72bc26bd"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.436681 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" event={"ID":"24f969bf-0ff8-4e85-a388-bde2f6ad68bb","Type":"ContainerStarted","Data":"0715b00e7ba8d157cddad83feb53ff6e0c2bcf34546dc439cbc8ea5f280fc2e3"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.437363 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" event={"ID":"4155bc83-03aa-4e2d-a024-0967569539b4","Type":"ContainerStarted","Data":"c6a9b944674e4010c2e1fdba189c6fba2a7f74552304a362d1e08e3ba270ab01"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.438216 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" event={"ID":"4617e695-08f5-496c-92cc-496f6ce85441","Type":"ContainerStarted","Data":"c0e112b35424707b8628d9d6ad214b8b279f5552e0cf3a503362b3e9d9518535"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.439682 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" event={"ID":"cd439265-4f6d-48db-bf6e-3353288aff58","Type":"ContainerStarted","Data":"eab6d8cdad3f95af4d4de7b449f3f8d06b6558acfbaa30bfebebcebb36b4a4f8"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.440601 4835 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" event={"ID":"91557fa8-9b53-43ea-b9bb-13117ee5d714","Type":"ContainerStarted","Data":"59febdac46471a8cfc8d5172cf66e57593fe8645235768f2d0d0bc73bd558a7d"} Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.441721 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" podUID="91557fa8-9b53-43ea-b9bb-13117ee5d714" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.441866 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" event={"ID":"b3a956a8-c3c7-4ca9-b8d5-902e89252e7c","Type":"ContainerStarted","Data":"293e122d86a6b737338a389e085cc12efe5ba514642328e8ca752613a5ce8d6b"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.442828 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" event={"ID":"f12b307e-157c-4aa1-91a5-2d55f2fa7def","Type":"ContainerStarted","Data":"fcf7df15bf79d9974443fc7338433f6a1da199e881f2870512f815fc92080394"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.445024 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" event={"ID":"602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e","Type":"ContainerStarted","Data":"798ad3f69980125aa6cc214f0c9759cf6f93c601c7b527df8c93d39bbac48cb1"} Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.450895 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" podUID="602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.452586 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" event={"ID":"40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256","Type":"ContainerStarted","Data":"e2b9a3dddbc5ad0426b79d11f12dbb825b183e8f8f6f16af537f204defadaff7"} Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.455581 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" podUID="40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.459834 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" event={"ID":"430b904f-ff3b-4b49-a212-7affb09621ef","Type":"ContainerStarted","Data":"b33d120340b4fdf766945f49b1835f2c35f7a3027224f3d88aab2174bb7ccddf"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.464602 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" event={"ID":"e9548890-1ce7-42a6-a870-ae0727b81a68","Type":"ContainerStarted","Data":"905676c814c9b103225966d27f2609d06fcaf9ef32a8ac284d572c64c6bf50cc"} Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.469777 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" podUID="e9548890-1ce7-42a6-a870-ae0727b81a68" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.470226 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" event={"ID":"e8ca5fe8-55ed-40a0-987e-59face1c1a19","Type":"ContainerStarted","Data":"9e13ebd6c5870c5cbf2c28970ad076970fcd05be577b5bdc76aef12d69c3bb5e"} Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.471158 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" podUID="e8ca5fe8-55ed-40a0-987e-59face1c1a19" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.472052 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" event={"ID":"18703fe7-0c06-4977-9357-b9eff4ecdeba","Type":"ContainerStarted","Data":"29ccf193d1aafbaa83092b2293197ed328c8561e95d507d8142a087c722634d7"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.473473 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" event={"ID":"e8e216ae-88d9-42e2-a387-b264904e7e20","Type":"ContainerStarted","Data":"cc0b1fae46850b98319d98fbd23844cf6063d88598756b6ae3ef001b97e59442"} Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.474775 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" podUID="e8e216ae-88d9-42e2-a387-b264904e7e20" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.476075 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" event={"ID":"af7a1a80-d93a-4587-adaa-dda2c307e344","Type":"ContainerStarted","Data":"796346d212c2d3250699d84e8f04981447f661539b937538be45121a4d35d57b"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.501414 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" event={"ID":"a7d34cbf-f94d-4170-9937-b0d05d9785e2","Type":"ContainerStarted","Data":"ed347debb27a9a3c0ff657dd86f36ddba904e68f489ff75ae848625cc21358c9"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.510013 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" event={"ID":"c9835f90-5158-4aa7-9cc5-4d3a1e1feb63","Type":"ContainerStarted","Data":"1bbd0037e49c12775425a13111b9aae4c542b5d9d4a4439fbeea469b06961714"} Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.511363 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" event={"ID":"76535411-ec0f-4b41-9d6e-084d72e4deec","Type":"ContainerStarted","Data":"4d9eb2408dba0651c4277a9386509d822464c9791183dd09e8342fb0ac5065a9"} Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.513360 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" podUID="76535411-ec0f-4b41-9d6e-084d72e4deec" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.573503 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.573677 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.574013 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert podName:2669aead-589f-4383-af0f-abea4a49f6fd nodeName:}" failed. No retries permitted until 2026-02-16 15:22:44.573996176 +0000 UTC m=+913.865989071 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" (UID: "2669aead-589f-4383-af0f-abea4a49f6fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.979550 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:42 crc kubenswrapper[4835]: I0216 15:22:42.979889 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.980064 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.980110 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:44.980094996 +0000 UTC m=+914.272087891 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "webhook-server-cert" not found Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.980409 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:22:42 crc kubenswrapper[4835]: E0216 15:22:42.980443 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:44.980435835 +0000 UTC m=+914.272428730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "metrics-server-cert" not found Feb 16 15:22:43 crc kubenswrapper[4835]: E0216 15:22:43.538667 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" podUID="76535411-ec0f-4b41-9d6e-084d72e4deec" Feb 16 15:22:43 crc kubenswrapper[4835]: E0216 15:22:43.540127 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" podUID="602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e" Feb 16 15:22:43 crc kubenswrapper[4835]: E0216 15:22:43.540771 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" podUID="e8ca5fe8-55ed-40a0-987e-59face1c1a19" Feb 16 15:22:43 crc kubenswrapper[4835]: E0216 15:22:43.542390 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" podUID="91557fa8-9b53-43ea-b9bb-13117ee5d714" Feb 16 15:22:43 crc kubenswrapper[4835]: E0216 15:22:43.542432 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" podUID="40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256" Feb 16 15:22:43 crc kubenswrapper[4835]: E0216 15:22:43.542448 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" 
podUID="e9548890-1ce7-42a6-a870-ae0727b81a68" Feb 16 15:22:43 crc kubenswrapper[4835]: E0216 15:22:43.544105 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" podUID="e8e216ae-88d9-42e2-a387-b264904e7e20" Feb 16 15:22:44 crc kubenswrapper[4835]: I0216 15:22:44.199710 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:44 crc kubenswrapper[4835]: E0216 15:22:44.199921 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:44 crc kubenswrapper[4835]: E0216 15:22:44.200013 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert podName:68f68635-db35-44a5-8256-cea92b856a61 nodeName:}" failed. No retries permitted until 2026-02-16 15:22:48.199993079 +0000 UTC m=+917.491985974 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert") pod "infra-operator-controller-manager-79d975b745-w9l9w" (UID: "68f68635-db35-44a5-8256-cea92b856a61") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:44 crc kubenswrapper[4835]: I0216 15:22:44.605175 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:44 crc kubenswrapper[4835]: E0216 15:22:44.605328 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:44 crc kubenswrapper[4835]: E0216 15:22:44.605373 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert podName:2669aead-589f-4383-af0f-abea4a49f6fd nodeName:}" failed. No retries permitted until 2026-02-16 15:22:48.60536012 +0000 UTC m=+917.897353015 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" (UID: "2669aead-589f-4383-af0f-abea4a49f6fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:45 crc kubenswrapper[4835]: I0216 15:22:45.011330 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:45 crc kubenswrapper[4835]: I0216 15:22:45.011379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:45 crc kubenswrapper[4835]: E0216 15:22:45.011602 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:22:45 crc kubenswrapper[4835]: E0216 15:22:45.011649 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:49.011635995 +0000 UTC m=+918.303628890 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "webhook-server-cert" not found Feb 16 15:22:45 crc kubenswrapper[4835]: E0216 15:22:45.011965 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:22:45 crc kubenswrapper[4835]: E0216 15:22:45.011997 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:49.011989964 +0000 UTC m=+918.303982859 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "metrics-server-cert" not found Feb 16 15:22:45 crc kubenswrapper[4835]: I0216 15:22:45.897856 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6xdbr"] Feb 16 15:22:45 crc kubenswrapper[4835]: I0216 15:22:45.899185 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:45 crc kubenswrapper[4835]: I0216 15:22:45.908343 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xdbr"] Feb 16 15:22:45 crc kubenswrapper[4835]: I0216 15:22:45.925476 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-catalog-content\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:45 crc kubenswrapper[4835]: I0216 15:22:45.925611 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-utilities\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:45 crc kubenswrapper[4835]: I0216 15:22:45.925633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5b4q\" (UniqueName: \"kubernetes.io/projected/3ff8ca48-8641-46ab-9353-7a1d0c649acd-kube-api-access-z5b4q\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:46 crc kubenswrapper[4835]: I0216 15:22:46.026159 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-catalog-content\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:46 crc kubenswrapper[4835]: I0216 15:22:46.026284 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-utilities\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:46 crc kubenswrapper[4835]: I0216 15:22:46.026301 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5b4q\" (UniqueName: \"kubernetes.io/projected/3ff8ca48-8641-46ab-9353-7a1d0c649acd-kube-api-access-z5b4q\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:46 crc kubenswrapper[4835]: I0216 15:22:46.026869 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-utilities\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:46 crc kubenswrapper[4835]: I0216 15:22:46.029857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-catalog-content\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:46 crc kubenswrapper[4835]: I0216 15:22:46.059264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5b4q\" (UniqueName: \"kubernetes.io/projected/3ff8ca48-8641-46ab-9353-7a1d0c649acd-kube-api-access-z5b4q\") pod \"community-operators-6xdbr\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:46 crc kubenswrapper[4835]: I0216 15:22:46.235913 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:22:48 crc kubenswrapper[4835]: I0216 15:22:48.258806 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:48 crc kubenswrapper[4835]: E0216 15:22:48.258958 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:48 crc kubenswrapper[4835]: E0216 15:22:48.259128 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert podName:68f68635-db35-44a5-8256-cea92b856a61 nodeName:}" failed. No retries permitted until 2026-02-16 15:22:56.259112196 +0000 UTC m=+925.551105091 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert") pod "infra-operator-controller-manager-79d975b745-w9l9w" (UID: "68f68635-db35-44a5-8256-cea92b856a61") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:22:48 crc kubenswrapper[4835]: I0216 15:22:48.590026 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:22:48 crc kubenswrapper[4835]: I0216 15:22:48.590391 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:22:48 crc kubenswrapper[4835]: I0216 15:22:48.664621 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:48 crc kubenswrapper[4835]: E0216 15:22:48.664821 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:48 crc kubenswrapper[4835]: E0216 15:22:48.664876 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert podName:2669aead-589f-4383-af0f-abea4a49f6fd nodeName:}" failed. 
No retries permitted until 2026-02-16 15:22:56.664862026 +0000 UTC m=+925.956854921 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" (UID: "2669aead-589f-4383-af0f-abea4a49f6fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:22:49 crc kubenswrapper[4835]: I0216 15:22:49.076806 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:49 crc kubenswrapper[4835]: I0216 15:22:49.076857 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:49 crc kubenswrapper[4835]: E0216 15:22:49.077004 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:22:49 crc kubenswrapper[4835]: E0216 15:22:49.077050 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:57.077036831 +0000 UTC m=+926.369029726 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "webhook-server-cert" not found Feb 16 15:22:49 crc kubenswrapper[4835]: E0216 15:22:49.077166 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:22:49 crc kubenswrapper[4835]: E0216 15:22:49.077257 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:22:57.077234536 +0000 UTC m=+926.369227441 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "metrics-server-cert" not found Feb 16 15:22:54 crc kubenswrapper[4835]: E0216 15:22:54.932332 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 16 15:22:54 crc kubenswrapper[4835]: E0216 15:22:54.932967 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sscgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-zwf6h_openstack-operators(cd439265-4f6d-48db-bf6e-3353288aff58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:22:54 crc kubenswrapper[4835]: E0216 15:22:54.934255 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" podUID="cd439265-4f6d-48db-bf6e-3353288aff58" Feb 16 15:22:55 crc kubenswrapper[4835]: E0216 15:22:55.648166 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" podUID="cd439265-4f6d-48db-bf6e-3353288aff58" Feb 16 15:22:56 crc kubenswrapper[4835]: I0216 15:22:56.290657 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:56 crc kubenswrapper[4835]: I0216 15:22:56.296614 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68f68635-db35-44a5-8256-cea92b856a61-cert\") pod \"infra-operator-controller-manager-79d975b745-w9l9w\" (UID: \"68f68635-db35-44a5-8256-cea92b856a61\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:56 crc kubenswrapper[4835]: I0216 15:22:56.472392 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:22:56 crc kubenswrapper[4835]: E0216 15:22:56.560935 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 15:22:56 crc kubenswrapper[4835]: E0216 15:22:56.561155 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6nplj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-xzw5m_openstack-operators(18703fe7-0c06-4977-9357-b9eff4ecdeba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:22:56 crc kubenswrapper[4835]: E0216 15:22:56.562376 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" podUID="18703fe7-0c06-4977-9357-b9eff4ecdeba" Feb 16 15:22:56 crc kubenswrapper[4835]: E0216 15:22:56.654189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" podUID="18703fe7-0c06-4977-9357-b9eff4ecdeba" Feb 16 15:22:56 crc kubenswrapper[4835]: I0216 15:22:56.695841 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:56 crc kubenswrapper[4835]: I0216 15:22:56.699264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2669aead-589f-4383-af0f-abea4a49f6fd-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr\" (UID: \"2669aead-589f-4383-af0f-abea4a49f6fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:56 crc kubenswrapper[4835]: I0216 15:22:56.889064 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.031053 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-txwlq"] Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.032579 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.041878 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txwlq"] Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.103409 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.103452 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:22:57 crc kubenswrapper[4835]: E0216 15:22:57.103614 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:22:57 crc kubenswrapper[4835]: E0216 15:22:57.103650 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:22:57 crc kubenswrapper[4835]: E0216 15:22:57.103688 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:23:13.103675295 +0000 UTC m=+942.395668190 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "webhook-server-cert" not found Feb 16 15:22:57 crc kubenswrapper[4835]: E0216 15:22:57.103721 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs podName:1de323fb-2bae-44d3-a31a-07b0f2d9d53b nodeName:}" failed. No retries permitted until 2026-02-16 15:23:13.103702576 +0000 UTC m=+942.395695471 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs") pod "openstack-operator-controller-manager-54dd757795-lpkxx" (UID: "1de323fb-2bae-44d3-a31a-07b0f2d9d53b") : secret "metrics-server-cert" not found Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.204623 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56fc4\" (UniqueName: \"kubernetes.io/projected/4455b323-7b78-42f7-ac34-5b74cd7ca45b-kube-api-access-56fc4\") pod \"certified-operators-txwlq\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.205017 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-utilities\") pod \"certified-operators-txwlq\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.205043 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-catalog-content\") pod \"certified-operators-txwlq\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: E0216 15:22:57.237265 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 15:22:57 crc kubenswrapper[4835]: E0216 15:22:57.237434 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-v56sr_openstack-operators(f12b307e-157c-4aa1-91a5-2d55f2fa7def): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:22:57 crc kubenswrapper[4835]: E0216 15:22:57.238666 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" podUID="f12b307e-157c-4aa1-91a5-2d55f2fa7def" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.313227 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56fc4\" (UniqueName: \"kubernetes.io/projected/4455b323-7b78-42f7-ac34-5b74cd7ca45b-kube-api-access-56fc4\") pod 
\"certified-operators-txwlq\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.313345 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-utilities\") pod \"certified-operators-txwlq\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.313371 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-catalog-content\") pod \"certified-operators-txwlq\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.314135 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-catalog-content\") pod \"certified-operators-txwlq\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.314556 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-utilities\") pod \"certified-operators-txwlq\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.335082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56fc4\" (UniqueName: \"kubernetes.io/projected/4455b323-7b78-42f7-ac34-5b74cd7ca45b-kube-api-access-56fc4\") pod \"certified-operators-txwlq\" (UID: 
\"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.355849 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:22:57 crc kubenswrapper[4835]: I0216 15:22:57.542401 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6xdbr"] Feb 16 15:22:57 crc kubenswrapper[4835]: E0216 15:22:57.663437 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" podUID="f12b307e-157c-4aa1-91a5-2d55f2fa7def" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.415716 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nhdbv"] Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.418258 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.430582 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhdbv"] Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.488382 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-utilities\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.488456 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxhvn\" (UniqueName: \"kubernetes.io/projected/e7ab49da-31a8-43fa-8580-9c3a90687163-kube-api-access-qxhvn\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.488959 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-catalog-content\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.590167 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-catalog-content\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.590221 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-utilities\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.590254 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxhvn\" (UniqueName: \"kubernetes.io/projected/e7ab49da-31a8-43fa-8580-9c3a90687163-kube-api-access-qxhvn\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.590935 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-utilities\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.592016 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-catalog-content\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.612752 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxhvn\" (UniqueName: \"kubernetes.io/projected/e7ab49da-31a8-43fa-8580-9c3a90687163-kube-api-access-qxhvn\") pod \"redhat-marketplace-nhdbv\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:00 crc kubenswrapper[4835]: I0216 15:23:00.739060 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:02 crc kubenswrapper[4835]: I0216 15:23:02.706182 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xdbr" event={"ID":"3ff8ca48-8641-46ab-9353-7a1d0c649acd","Type":"ContainerStarted","Data":"6133f363274e6735d7a21f329ee208f001e1b2edcbf279ef0c8b705e17d440d7"} Feb 16 15:23:03 crc kubenswrapper[4835]: I0216 15:23:03.458241 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-txwlq"] Feb 16 15:23:04 crc kubenswrapper[4835]: I0216 15:23:04.180964 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w"] Feb 16 15:23:04 crc kubenswrapper[4835]: W0216 15:23:04.430961 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4455b323_7b78_42f7_ac34_5b74cd7ca45b.slice/crio-99f1fedfacb9bc3213a15e2b3d1f1739b829bd9ff2e4a20f4615bbc142b90345 WatchSource:0}: Error finding container 99f1fedfacb9bc3213a15e2b3d1f1739b829bd9ff2e4a20f4615bbc142b90345: Status 404 returned error can't find the container with id 99f1fedfacb9bc3213a15e2b3d1f1739b829bd9ff2e4a20f4615bbc142b90345 Feb 16 15:23:04 crc kubenswrapper[4835]: I0216 15:23:04.721461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwlq" event={"ID":"4455b323-7b78-42f7-ac34-5b74cd7ca45b","Type":"ContainerStarted","Data":"99f1fedfacb9bc3213a15e2b3d1f1739b829bd9ff2e4a20f4615bbc142b90345"} Feb 16 15:23:05 crc kubenswrapper[4835]: W0216 15:23:05.686265 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68f68635_db35_44a5_8256_cea92b856a61.slice/crio-6f32d8424e80ddffa29f2be82c4508ce349284e192cb8517d45ea83f59bc0e20 WatchSource:0}: Error finding container 
6f32d8424e80ddffa29f2be82c4508ce349284e192cb8517d45ea83f59bc0e20: Status 404 returned error can't find the container with id 6f32d8424e80ddffa29f2be82c4508ce349284e192cb8517d45ea83f59bc0e20 Feb 16 15:23:05 crc kubenswrapper[4835]: I0216 15:23:05.737125 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" event={"ID":"68f68635-db35-44a5-8256-cea92b856a61","Type":"ContainerStarted","Data":"6f32d8424e80ddffa29f2be82c4508ce349284e192cb8517d45ea83f59bc0e20"} Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.366375 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr"] Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.371551 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhdbv"] Feb 16 15:23:06 crc kubenswrapper[4835]: W0216 15:23:06.389310 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7ab49da_31a8_43fa_8580_9c3a90687163.slice/crio-fa4b81010f733b8969add2547a4903324c7fb8d170a23810de58f3d18f646600 WatchSource:0}: Error finding container fa4b81010f733b8969add2547a4903324c7fb8d170a23810de58f3d18f646600: Status 404 returned error can't find the container with id fa4b81010f733b8969add2547a4903324c7fb8d170a23810de58f3d18f646600 Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.752408 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhdbv" event={"ID":"e7ab49da-31a8-43fa-8580-9c3a90687163","Type":"ContainerStarted","Data":"fa4b81010f733b8969add2547a4903324c7fb8d170a23810de58f3d18f646600"} Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.757018 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" 
event={"ID":"a7d34cbf-f94d-4170-9937-b0d05d9785e2","Type":"ContainerStarted","Data":"a642f7df27a266d7260d499e427afa2f71f39224cd1e92aff219390b2aa1be0b"} Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.768421 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" event={"ID":"c9835f90-5158-4aa7-9cc5-4d3a1e1feb63","Type":"ContainerStarted","Data":"e3a0fbcff3e583463f02365aaa20609bff3cfef8a57fb9d35e27614cbbd3242b"} Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.768549 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.770171 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" event={"ID":"2669aead-589f-4383-af0f-abea4a49f6fd","Type":"ContainerStarted","Data":"9c42eb912e74265ee8f8e83201425e2973ab26d573b680165807f9617ed50dc2"} Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.773716 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" event={"ID":"24f969bf-0ff8-4e85-a388-bde2f6ad68bb","Type":"ContainerStarted","Data":"1e2dacbeba4fc77929878565ff3f3738a2b1509324cc6827710d30e03b3b8511"} Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.774314 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.789280 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" event={"ID":"430b904f-ff3b-4b49-a212-7affb09621ef","Type":"ContainerStarted","Data":"974163e5b4b9d486a6eb3300e4265b4dda2e873998fca18dcfce627c802d23a9"} Feb 16 15:23:06 crc 
kubenswrapper[4835]: I0216 15:23:06.790011 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.795909 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xdbr" event={"ID":"3ff8ca48-8641-46ab-9353-7a1d0c649acd","Type":"ContainerStarted","Data":"36effe000b872e9577e12c77413c6c66e82e030b810d2c775ca18b210c1a3101"} Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.815231 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" podStartSLOduration=11.552041402 podStartE2EDuration="26.81520867s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:41.94637354 +0000 UTC m=+911.238366435" lastFinishedPulling="2026-02-16 15:22:57.209540808 +0000 UTC m=+926.501533703" observedRunningTime="2026-02-16 15:23:06.786801147 +0000 UTC m=+936.078794062" watchObservedRunningTime="2026-02-16 15:23:06.81520867 +0000 UTC m=+936.107201565" Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.817350 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" podStartSLOduration=12.011565581 podStartE2EDuration="26.817341364s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.403584331 +0000 UTC m=+911.695577226" lastFinishedPulling="2026-02-16 15:22:57.209360114 +0000 UTC m=+926.501353009" observedRunningTime="2026-02-16 15:23:06.804313523 +0000 UTC m=+936.096306418" watchObservedRunningTime="2026-02-16 15:23:06.817341364 +0000 UTC m=+936.109334259" Feb 16 15:23:06 crc kubenswrapper[4835]: I0216 15:23:06.833687 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" podStartSLOduration=11.962713688000001 podStartE2EDuration="26.833669339s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.338893165 +0000 UTC m=+911.630886060" lastFinishedPulling="2026-02-16 15:22:57.209848816 +0000 UTC m=+926.501841711" observedRunningTime="2026-02-16 15:23:06.828814306 +0000 UTC m=+936.120807201" watchObservedRunningTime="2026-02-16 15:23:06.833669339 +0000 UTC m=+936.125662234" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.816324 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" event={"ID":"4617e695-08f5-496c-92cc-496f6ce85441","Type":"ContainerStarted","Data":"88fc45d1b84f35ea226765a76035a69a9f61774d3677272f9341438b07591f1b"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.816668 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.823464 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" event={"ID":"79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b","Type":"ContainerStarted","Data":"c20bf689bb80a125badb006cb621030c6889dbb1cc131f5cfa39a81c11afb63f"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.823659 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.828274 4835 generic.go:334] "Generic (PLEG): container finished" podID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerID="36effe000b872e9577e12c77413c6c66e82e030b810d2c775ca18b210c1a3101" exitCode=0 Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.828346 4835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-6xdbr" event={"ID":"3ff8ca48-8641-46ab-9353-7a1d0c649acd","Type":"ContainerDied","Data":"36effe000b872e9577e12c77413c6c66e82e030b810d2c775ca18b210c1a3101"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.837314 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" event={"ID":"b3a956a8-c3c7-4ca9-b8d5-902e89252e7c","Type":"ContainerStarted","Data":"371ea8b82d34c4c03cc68b453eef079bd6e2707bc1ee2a43e2bf3fc04d73c4e4"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.837642 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.842396 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" podStartSLOduration=12.76904131 podStartE2EDuration="27.84238186s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.13665949 +0000 UTC m=+911.428652385" lastFinishedPulling="2026-02-16 15:22:57.21000004 +0000 UTC m=+926.501992935" observedRunningTime="2026-02-16 15:23:07.838581704 +0000 UTC m=+937.130574609" watchObservedRunningTime="2026-02-16 15:23:07.84238186 +0000 UTC m=+937.134374755" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.848158 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" event={"ID":"e8e216ae-88d9-42e2-a387-b264904e7e20","Type":"ContainerStarted","Data":"bcf48a7c9a9f6f9158bc9287e2e979e4db195cf251ec55403b25bf0625600575"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.853105 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" 
event={"ID":"e8ca5fe8-55ed-40a0-987e-59face1c1a19","Type":"ContainerStarted","Data":"cc5833a6a8526e619ea26913a37c152a5b3e2eebbfc3d136440463e274e7d0dd"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.853308 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.859545 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" event={"ID":"76535411-ec0f-4b41-9d6e-084d72e4deec","Type":"ContainerStarted","Data":"645621863fcf885d97ee9732f36a08b49824f79029744e9e3975586dc2e7a792"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.859793 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.862809 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" event={"ID":"4155bc83-03aa-4e2d-a024-0967569539b4","Type":"ContainerStarted","Data":"c28482560f7d18d8c11a6d1384b5d85f6a90cfedc1ee63032709c17aaeee36c8"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.863476 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.865328 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" event={"ID":"fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765","Type":"ContainerStarted","Data":"0a9fb01fd490f8c6908d4c159327a24736825e286a826836e7269ed8efe0c17a"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.866076 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.871195 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" event={"ID":"91557fa8-9b53-43ea-b9bb-13117ee5d714","Type":"ContainerStarted","Data":"f031bdbf310e3c95330a3280c09563577b10611ebd1b4887d617ecba31b75844"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.871396 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" podStartSLOduration=7.519913662 podStartE2EDuration="27.871385188s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:41.945678942 +0000 UTC m=+911.237671837" lastFinishedPulling="2026-02-16 15:23:02.297150468 +0000 UTC m=+931.589143363" observedRunningTime="2026-02-16 15:23:07.869922691 +0000 UTC m=+937.161915596" watchObservedRunningTime="2026-02-16 15:23:07.871385188 +0000 UTC m=+937.163378083" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.871899 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.876659 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" event={"ID":"af7a1a80-d93a-4587-adaa-dda2c307e344","Type":"ContainerStarted","Data":"3c6e978f8aa2d8578f945114e674d67d5312747629a8bce4534882e3a45d0c81"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.876690 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.881096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" event={"ID":"602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e","Type":"ContainerStarted","Data":"ed01a6d2cb979bf6e9c3995e7d38f3080a011f3b18e8c637c1bf7aed8e3ad69c"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.881774 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.888856 4835 generic.go:334] "Generic (PLEG): container finished" podID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerID="37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89" exitCode=0 Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.888917 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhdbv" event={"ID":"e7ab49da-31a8-43fa-8580-9c3a90687163","Type":"ContainerDied","Data":"37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.901803 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" podStartSLOduration=7.9566850030000005 podStartE2EDuration="27.901786222s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.351630269 +0000 UTC m=+911.643623164" lastFinishedPulling="2026-02-16 15:23:02.296731488 +0000 UTC m=+931.588724383" observedRunningTime="2026-02-16 15:23:07.896479447 +0000 UTC m=+937.188472372" watchObservedRunningTime="2026-02-16 15:23:07.901786222 +0000 UTC m=+937.193779107" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.911420 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" 
event={"ID":"e9548890-1ce7-42a6-a870-ae0727b81a68","Type":"ContainerStarted","Data":"31a3aa4f605e2fdffc4788dd7a612a2f59a664ab65a577f85d0545583acc4ae8"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.912195 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.926292 4835 generic.go:334] "Generic (PLEG): container finished" podID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerID="a6c0bf891ce44988620ab845cbf3a5103297dd1e5c005ccebdd0c98e7cc027c8" exitCode=0 Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.926356 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwlq" event={"ID":"4455b323-7b78-42f7-ac34-5b74cd7ca45b","Type":"ContainerDied","Data":"a6c0bf891ce44988620ab845cbf3a5103297dd1e5c005ccebdd0c98e7cc027c8"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.984970 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" event={"ID":"40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256","Type":"ContainerStarted","Data":"448dfee582534c2e0ea63b1973095d1a9142129881905cda1ebbc113c42d84ce"} Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.985421 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" Feb 16 15:23:07 crc kubenswrapper[4835]: I0216 15:23:07.986386 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.187159 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" podStartSLOduration=13.121098847 podStartE2EDuration="28.187143751s" 
podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.143676049 +0000 UTC m=+911.435668944" lastFinishedPulling="2026-02-16 15:22:57.209720953 +0000 UTC m=+926.501713848" observedRunningTime="2026-02-16 15:23:08.178688546 +0000 UTC m=+937.470681441" watchObservedRunningTime="2026-02-16 15:23:08.187143751 +0000 UTC m=+937.479136646" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.230768 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" podStartSLOduration=4.259299527 podStartE2EDuration="28.23074949s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.392980971 +0000 UTC m=+911.684973866" lastFinishedPulling="2026-02-16 15:23:06.364430924 +0000 UTC m=+935.656423829" observedRunningTime="2026-02-16 15:23:08.227027885 +0000 UTC m=+937.519020780" watchObservedRunningTime="2026-02-16 15:23:08.23074949 +0000 UTC m=+937.522742385" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.359480 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" podStartSLOduration=13.124230426 podStartE2EDuration="28.359461514s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:41.974571487 +0000 UTC m=+911.266564372" lastFinishedPulling="2026-02-16 15:22:57.209802565 +0000 UTC m=+926.501795460" observedRunningTime="2026-02-16 15:23:08.351929083 +0000 UTC m=+937.643921978" watchObservedRunningTime="2026-02-16 15:23:08.359461514 +0000 UTC m=+937.651454409" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.364038 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" podStartSLOduration=5.053252016 podStartE2EDuration="28.364028751s" 
podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.420807879 +0000 UTC m=+911.712800774" lastFinishedPulling="2026-02-16 15:23:05.731584614 +0000 UTC m=+935.023577509" observedRunningTime="2026-02-16 15:23:08.307715208 +0000 UTC m=+937.599708103" watchObservedRunningTime="2026-02-16 15:23:08.364028751 +0000 UTC m=+937.656021646" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.430688 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-29wr6" podStartSLOduration=2.938341144 podStartE2EDuration="27.430672856s" podCreationTimestamp="2026-02-16 15:22:41 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.396272685 +0000 UTC m=+911.688265570" lastFinishedPulling="2026-02-16 15:23:06.888604387 +0000 UTC m=+936.180597282" observedRunningTime="2026-02-16 15:23:08.422773675 +0000 UTC m=+937.714766570" watchObservedRunningTime="2026-02-16 15:23:08.430672856 +0000 UTC m=+937.722665751" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.431024 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" podStartSLOduration=8.4687667 podStartE2EDuration="28.431018365s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.339652024 +0000 UTC m=+911.631644919" lastFinishedPulling="2026-02-16 15:23:02.301903689 +0000 UTC m=+931.593896584" observedRunningTime="2026-02-16 15:23:08.389254192 +0000 UTC m=+937.681247087" watchObservedRunningTime="2026-02-16 15:23:08.431018365 +0000 UTC m=+937.723011260" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.452491 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" podStartSLOduration=4.403895626 podStartE2EDuration="28.45246949s" podCreationTimestamp="2026-02-16 
15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.392802376 +0000 UTC m=+911.684795271" lastFinishedPulling="2026-02-16 15:23:06.44137624 +0000 UTC m=+935.733369135" observedRunningTime="2026-02-16 15:23:08.447084284 +0000 UTC m=+937.739077179" watchObservedRunningTime="2026-02-16 15:23:08.45246949 +0000 UTC m=+937.744462375" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.494983 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" podStartSLOduration=4.46694477 podStartE2EDuration="28.494965261s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.405704034 +0000 UTC m=+911.697696929" lastFinishedPulling="2026-02-16 15:23:06.433724525 +0000 UTC m=+935.725717420" observedRunningTime="2026-02-16 15:23:08.487369018 +0000 UTC m=+937.779361913" watchObservedRunningTime="2026-02-16 15:23:08.494965261 +0000 UTC m=+937.786958156" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.581397 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" podStartSLOduration=7.23005846 podStartE2EDuration="28.58137829s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.409988633 +0000 UTC m=+911.701981518" lastFinishedPulling="2026-02-16 15:23:03.761308453 +0000 UTC m=+933.053301348" observedRunningTime="2026-02-16 15:23:08.542150942 +0000 UTC m=+937.834143837" watchObservedRunningTime="2026-02-16 15:23:08.58137829 +0000 UTC m=+937.873371185" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.626413 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" podStartSLOduration=13.215003066 podStartE2EDuration="28.626394095s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" 
firstStartedPulling="2026-02-16 15:22:41.798060617 +0000 UTC m=+911.090053512" lastFinishedPulling="2026-02-16 15:22:57.209451626 +0000 UTC m=+926.501444541" observedRunningTime="2026-02-16 15:23:08.586959472 +0000 UTC m=+937.878952367" watchObservedRunningTime="2026-02-16 15:23:08.626394095 +0000 UTC m=+937.918386990" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.672100 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" podStartSLOduration=4.695811202 podStartE2EDuration="28.672082797s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.405090509 +0000 UTC m=+911.697083404" lastFinishedPulling="2026-02-16 15:23:06.381362114 +0000 UTC m=+935.673354999" observedRunningTime="2026-02-16 15:23:08.669352198 +0000 UTC m=+937.961345093" watchObservedRunningTime="2026-02-16 15:23:08.672082797 +0000 UTC m=+937.964075692" Feb 16 15:23:08 crc kubenswrapper[4835]: I0216 15:23:08.998115 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xdbr" event={"ID":"3ff8ca48-8641-46ab-9353-7a1d0c649acd","Type":"ContainerStarted","Data":"cfc0011eb3e385e066c845bdbc07e5e887b27695464bcf955c3070bccaf420a2"} Feb 16 15:23:09 crc kubenswrapper[4835]: I0216 15:23:09.001188 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhdbv" event={"ID":"e7ab49da-31a8-43fa-8580-9c3a90687163","Type":"ContainerStarted","Data":"32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d"} Feb 16 15:23:09 crc kubenswrapper[4835]: I0216 15:23:09.006140 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" event={"ID":"cd439265-4f6d-48db-bf6e-3353288aff58","Type":"ContainerStarted","Data":"dafc70e69a2801b3428ca08a3923f8ab00b428f72c0112cb52fd196fb841f4c1"} Feb 16 15:23:09 crc 
kubenswrapper[4835]: I0216 15:23:09.007344 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" Feb 16 15:23:09 crc kubenswrapper[4835]: I0216 15:23:09.053753 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" podStartSLOduration=2.977657223 podStartE2EDuration="29.053735705s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:41.945652801 +0000 UTC m=+911.237645696" lastFinishedPulling="2026-02-16 15:23:08.021731293 +0000 UTC m=+937.313724178" observedRunningTime="2026-02-16 15:23:09.047354523 +0000 UTC m=+938.339347418" watchObservedRunningTime="2026-02-16 15:23:09.053735705 +0000 UTC m=+938.345728600" Feb 16 15:23:10 crc kubenswrapper[4835]: I0216 15:23:10.014121 4835 generic.go:334] "Generic (PLEG): container finished" podID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerID="7fe4e69e2b88328b4c82371ccb69c959868315a1472c165a4b6a516eaeddf045" exitCode=0 Feb 16 15:23:10 crc kubenswrapper[4835]: I0216 15:23:10.014313 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwlq" event={"ID":"4455b323-7b78-42f7-ac34-5b74cd7ca45b","Type":"ContainerDied","Data":"7fe4e69e2b88328b4c82371ccb69c959868315a1472c165a4b6a516eaeddf045"} Feb 16 15:23:10 crc kubenswrapper[4835]: I0216 15:23:10.016194 4835 generic.go:334] "Generic (PLEG): container finished" podID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerID="cfc0011eb3e385e066c845bdbc07e5e887b27695464bcf955c3070bccaf420a2" exitCode=0 Feb 16 15:23:10 crc kubenswrapper[4835]: I0216 15:23:10.016240 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xdbr" event={"ID":"3ff8ca48-8641-46ab-9353-7a1d0c649acd","Type":"ContainerDied","Data":"cfc0011eb3e385e066c845bdbc07e5e887b27695464bcf955c3070bccaf420a2"} Feb 
16 15:23:10 crc kubenswrapper[4835]: I0216 15:23:10.019000 4835 generic.go:334] "Generic (PLEG): container finished" podID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerID="32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d" exitCode=0 Feb 16 15:23:10 crc kubenswrapper[4835]: I0216 15:23:10.019721 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhdbv" event={"ID":"e7ab49da-31a8-43fa-8580-9c3a90687163","Type":"ContainerDied","Data":"32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d"} Feb 16 15:23:11 crc kubenswrapper[4835]: I0216 15:23:11.014928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-4g6lm" Feb 16 15:23:11 crc kubenswrapper[4835]: I0216 15:23:11.191461 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-52427" Feb 16 15:23:11 crc kubenswrapper[4835]: I0216 15:23:11.613417 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-mx5lg" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.040144 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhdbv" event={"ID":"e7ab49da-31a8-43fa-8580-9c3a90687163","Type":"ContainerStarted","Data":"7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7"} Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.041653 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" event={"ID":"68f68635-db35-44a5-8256-cea92b856a61","Type":"ContainerStarted","Data":"4607baf28f80f3b38b0178c5dc992f4a76c1ec7a8db597e48edac7e48997b71d"} Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.042331 4835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.044078 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwlq" event={"ID":"4455b323-7b78-42f7-ac34-5b74cd7ca45b","Type":"ContainerStarted","Data":"b96a686b5770e1a469026af0513baa866697b44754b8e6edfe5532f9e2155709"} Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.045738 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" event={"ID":"18703fe7-0c06-4977-9357-b9eff4ecdeba","Type":"ContainerStarted","Data":"c4a75b346578b4047b6a7d1232fac30eaec4902849435f7173284ea51a03c247"} Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.046219 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.047492 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" event={"ID":"2669aead-589f-4383-af0f-abea4a49f6fd","Type":"ContainerStarted","Data":"04679493b8383f4ea0a4296031a6996e2d2375488749bc986a2951f806b8512e"} Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.047592 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.050565 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xdbr" event={"ID":"3ff8ca48-8641-46ab-9353-7a1d0c649acd","Type":"ContainerStarted","Data":"26be561eb114b132f493efe749ddd217024a706ee28857ad0863aae771210a0f"} Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.068695 4835 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nhdbv" podStartSLOduration=8.652309114 podStartE2EDuration="13.068678609s" podCreationTimestamp="2026-02-16 15:23:00 +0000 UTC" firstStartedPulling="2026-02-16 15:23:07.895887712 +0000 UTC m=+937.187880607" lastFinishedPulling="2026-02-16 15:23:12.312257207 +0000 UTC m=+941.604250102" observedRunningTime="2026-02-16 15:23:13.067802507 +0000 UTC m=+942.359795402" watchObservedRunningTime="2026-02-16 15:23:13.068678609 +0000 UTC m=+942.360671504" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.086300 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" podStartSLOduration=26.503226404 podStartE2EDuration="33.086283717s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:23:05.69208819 +0000 UTC m=+934.984081085" lastFinishedPulling="2026-02-16 15:23:12.275145513 +0000 UTC m=+941.567138398" observedRunningTime="2026-02-16 15:23:13.079852853 +0000 UTC m=+942.371845748" watchObservedRunningTime="2026-02-16 15:23:13.086283717 +0000 UTC m=+942.378276612" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.115548 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6xdbr" podStartSLOduration=23.674432217 podStartE2EDuration="28.115517691s" podCreationTimestamp="2026-02-16 15:22:45 +0000 UTC" firstStartedPulling="2026-02-16 15:23:07.829512593 +0000 UTC m=+937.121505488" lastFinishedPulling="2026-02-16 15:23:12.270598057 +0000 UTC m=+941.562590962" observedRunningTime="2026-02-16 15:23:13.109626941 +0000 UTC m=+942.401619836" watchObservedRunningTime="2026-02-16 15:23:13.115517691 +0000 UTC m=+942.407510586" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.155495 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.155743 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.178666 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-metrics-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.182459 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1de323fb-2bae-44d3-a31a-07b0f2d9d53b-webhook-certs\") pod \"openstack-operator-controller-manager-54dd757795-lpkxx\" (UID: \"1de323fb-2bae-44d3-a31a-07b0f2d9d53b\") " pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.203076 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" podStartSLOduration=27.315350633 podStartE2EDuration="33.203046867s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:23:06.383903769 +0000 
UTC m=+935.675896664" lastFinishedPulling="2026-02-16 15:23:12.271599973 +0000 UTC m=+941.563592898" observedRunningTime="2026-02-16 15:23:13.17091921 +0000 UTC m=+942.462912115" watchObservedRunningTime="2026-02-16 15:23:13.203046867 +0000 UTC m=+942.495039762" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.209583 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" podStartSLOduration=2.926914163 podStartE2EDuration="33.209569383s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.137226635 +0000 UTC m=+911.429219530" lastFinishedPulling="2026-02-16 15:23:12.419881835 +0000 UTC m=+941.711874750" observedRunningTime="2026-02-16 15:23:13.195012793 +0000 UTC m=+942.487005688" watchObservedRunningTime="2026-02-16 15:23:13.209569383 +0000 UTC m=+942.501562278" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.222519 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-txwlq" podStartSLOduration=11.886218093 podStartE2EDuration="16.222501032s" podCreationTimestamp="2026-02-16 15:22:57 +0000 UTC" firstStartedPulling="2026-02-16 15:23:08.002888513 +0000 UTC m=+937.294881408" lastFinishedPulling="2026-02-16 15:23:12.339171432 +0000 UTC m=+941.631164347" observedRunningTime="2026-02-16 15:23:13.21613463 +0000 UTC m=+942.508127515" watchObservedRunningTime="2026-02-16 15:23:13.222501032 +0000 UTC m=+942.514493927" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.463967 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:23:13 crc kubenswrapper[4835]: I0216 15:23:13.704646 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx"] Feb 16 15:23:14 crc kubenswrapper[4835]: I0216 15:23:14.057025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" event={"ID":"f12b307e-157c-4aa1-91a5-2d55f2fa7def","Type":"ContainerStarted","Data":"b50c3a3b50009af1a252f5861cda2cd8239bd084752641a007da4a35426b6f06"} Feb 16 15:23:14 crc kubenswrapper[4835]: I0216 15:23:14.057990 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" Feb 16 15:23:14 crc kubenswrapper[4835]: I0216 15:23:14.059686 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" event={"ID":"1de323fb-2bae-44d3-a31a-07b0f2d9d53b","Type":"ContainerStarted","Data":"683f9bf73ff6b95c07080808499f770ed8e553fe21be2d7204d4da585c095658"} Feb 16 15:23:14 crc kubenswrapper[4835]: I0216 15:23:14.059732 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:23:14 crc kubenswrapper[4835]: I0216 15:23:14.059742 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" event={"ID":"1de323fb-2bae-44d3-a31a-07b0f2d9d53b","Type":"ContainerStarted","Data":"67e0675229259f39ed7cd211d6a58129d76f75c98b5dac1a1a76b39708383843"} Feb 16 15:23:14 crc kubenswrapper[4835]: I0216 15:23:14.086321 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" 
podStartSLOduration=2.424281167 podStartE2EDuration="34.086305155s" podCreationTimestamp="2026-02-16 15:22:40 +0000 UTC" firstStartedPulling="2026-02-16 15:22:42.144810988 +0000 UTC m=+911.436803883" lastFinishedPulling="2026-02-16 15:23:13.806834976 +0000 UTC m=+943.098827871" observedRunningTime="2026-02-16 15:23:14.083827922 +0000 UTC m=+943.375820817" watchObservedRunningTime="2026-02-16 15:23:14.086305155 +0000 UTC m=+943.378298050" Feb 16 15:23:14 crc kubenswrapper[4835]: I0216 15:23:14.122818 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" podStartSLOduration=33.122801204 podStartE2EDuration="33.122801204s" podCreationTimestamp="2026-02-16 15:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:23:14.116160035 +0000 UTC m=+943.408152930" watchObservedRunningTime="2026-02-16 15:23:14.122801204 +0000 UTC m=+943.414794099" Feb 16 15:23:16 crc kubenswrapper[4835]: I0216 15:23:16.236613 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:23:16 crc kubenswrapper[4835]: I0216 15:23:16.237639 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:23:16 crc kubenswrapper[4835]: I0216 15:23:16.282259 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:23:17 crc kubenswrapper[4835]: I0216 15:23:17.094136 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr" Feb 16 15:23:17 crc kubenswrapper[4835]: I0216 15:23:17.156591 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:23:17 crc kubenswrapper[4835]: I0216 15:23:17.357951 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:23:17 crc kubenswrapper[4835]: I0216 15:23:17.357989 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:23:17 crc kubenswrapper[4835]: I0216 15:23:17.396937 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:23:18 crc kubenswrapper[4835]: I0216 15:23:18.150124 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:23:18 crc kubenswrapper[4835]: I0216 15:23:18.586799 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:23:18 crc kubenswrapper[4835]: I0216 15:23:18.586889 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:23:18 crc kubenswrapper[4835]: I0216 15:23:18.586955 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:23:18 crc kubenswrapper[4835]: I0216 15:23:18.587811 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"fccec89c350093c4d7a854530c72eda475f0a4084457fa6bd80b80278b734735"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:23:18 crc kubenswrapper[4835]: I0216 15:23:18.587910 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://fccec89c350093c4d7a854530c72eda475f0a4084457fa6bd80b80278b734735" gracePeriod=600 Feb 16 15:23:19 crc kubenswrapper[4835]: I0216 15:23:19.112015 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="fccec89c350093c4d7a854530c72eda475f0a4084457fa6bd80b80278b734735" exitCode=0 Feb 16 15:23:19 crc kubenswrapper[4835]: I0216 15:23:19.112085 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"fccec89c350093c4d7a854530c72eda475f0a4084457fa6bd80b80278b734735"} Feb 16 15:23:19 crc kubenswrapper[4835]: I0216 15:23:19.112346 4835 scope.go:117] "RemoveContainer" containerID="73c350b3ac02f46ce7b27cf1db88e1f50effcba02c6c2d6096a643a3b0037668" Feb 16 15:23:19 crc kubenswrapper[4835]: I0216 15:23:19.404084 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xdbr"] Feb 16 15:23:19 crc kubenswrapper[4835]: I0216 15:23:19.404337 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6xdbr" podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerName="registry-server" containerID="cri-o://26be561eb114b132f493efe749ddd217024a706ee28857ad0863aae771210a0f" gracePeriod=2 Feb 16 15:23:20 crc 
kubenswrapper[4835]: I0216 15:23:20.005200 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txwlq"] Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.121170 4835 generic.go:334] "Generic (PLEG): container finished" podID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerID="26be561eb114b132f493efe749ddd217024a706ee28857ad0863aae771210a0f" exitCode=0 Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.121226 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xdbr" event={"ID":"3ff8ca48-8641-46ab-9353-7a1d0c649acd","Type":"ContainerDied","Data":"26be561eb114b132f493efe749ddd217024a706ee28857ad0863aae771210a0f"} Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.121427 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-txwlq" podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerName="registry-server" containerID="cri-o://b96a686b5770e1a469026af0513baa866697b44754b8e6edfe5532f9e2155709" gracePeriod=2 Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.542354 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cljs5" Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.570075 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-kwlf2" Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.575890 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-r658s" Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.654261 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-dztnx" Feb 16 15:23:20 crc 
kubenswrapper[4835]: I0216 15:23:20.678626 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-zwf6h" Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.739590 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.739658 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.783114 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.840219 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-p7bhp" Feb 16 15:23:20 crc kubenswrapper[4835]: I0216 15:23:20.965143 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-724ns" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.014841 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-xzw5m" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.046423 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ztgc5" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.087208 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-sf54k" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.124963 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-v56sr" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.136218 4835 generic.go:334] "Generic (PLEG): container finished" podID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerID="b96a686b5770e1a469026af0513baa866697b44754b8e6edfe5532f9e2155709" exitCode=0 Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.136630 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwlq" event={"ID":"4455b323-7b78-42f7-ac34-5b74cd7ca45b","Type":"ContainerDied","Data":"b96a686b5770e1a469026af0513baa866697b44754b8e6edfe5532f9e2155709"} Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.159097 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4bhnk" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.196602 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.201450 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-n52fd" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.447618 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-g5cw4" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.537300 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5884f785c-68ssq" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.624489 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-2k2dp" Feb 16 15:23:21 crc kubenswrapper[4835]: I0216 15:23:21.998023 4835 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.144703 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6xdbr" event={"ID":"3ff8ca48-8641-46ab-9353-7a1d0c649acd","Type":"ContainerDied","Data":"6133f363274e6735d7a21f329ee208f001e1b2edcbf279ef0c8b705e17d440d7"} Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.144701 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6xdbr" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.145688 4835 scope.go:117] "RemoveContainer" containerID="26be561eb114b132f493efe749ddd217024a706ee28857ad0863aae771210a0f" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.147827 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"55a8425e60a5ca5af019911f05c32c6de22275f80b64e52b734846168a32e3b3"} Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.165438 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-utilities\") pod \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.165558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5b4q\" (UniqueName: \"kubernetes.io/projected/3ff8ca48-8641-46ab-9353-7a1d0c649acd-kube-api-access-z5b4q\") pod \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.165600 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-catalog-content\") pod \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\" (UID: \"3ff8ca48-8641-46ab-9353-7a1d0c649acd\") " Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.166740 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-utilities" (OuterVolumeSpecName: "utilities") pod "3ff8ca48-8641-46ab-9353-7a1d0c649acd" (UID: "3ff8ca48-8641-46ab-9353-7a1d0c649acd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.172756 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff8ca48-8641-46ab-9353-7a1d0c649acd-kube-api-access-z5b4q" (OuterVolumeSpecName: "kube-api-access-z5b4q") pod "3ff8ca48-8641-46ab-9353-7a1d0c649acd" (UID: "3ff8ca48-8641-46ab-9353-7a1d0c649acd"). InnerVolumeSpecName "kube-api-access-z5b4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.191002 4835 scope.go:117] "RemoveContainer" containerID="cfc0011eb3e385e066c845bdbc07e5e887b27695464bcf955c3070bccaf420a2" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.232218 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ff8ca48-8641-46ab-9353-7a1d0c649acd" (UID: "3ff8ca48-8641-46ab-9353-7a1d0c649acd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.233497 4835 scope.go:117] "RemoveContainer" containerID="36effe000b872e9577e12c77413c6c66e82e030b810d2c775ca18b210c1a3101" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.267694 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.268291 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5b4q\" (UniqueName: \"kubernetes.io/projected/3ff8ca48-8641-46ab-9353-7a1d0c649acd-kube-api-access-z5b4q\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.268343 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff8ca48-8641-46ab-9353-7a1d0c649acd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.285688 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.470996 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-catalog-content\") pod \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.471069 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56fc4\" (UniqueName: \"kubernetes.io/projected/4455b323-7b78-42f7-ac34-5b74cd7ca45b-kube-api-access-56fc4\") pod \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.471170 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-utilities\") pod \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\" (UID: \"4455b323-7b78-42f7-ac34-5b74cd7ca45b\") " Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.472183 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-utilities" (OuterVolumeSpecName: "utilities") pod "4455b323-7b78-42f7-ac34-5b74cd7ca45b" (UID: "4455b323-7b78-42f7-ac34-5b74cd7ca45b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.475503 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4455b323-7b78-42f7-ac34-5b74cd7ca45b-kube-api-access-56fc4" (OuterVolumeSpecName: "kube-api-access-56fc4") pod "4455b323-7b78-42f7-ac34-5b74cd7ca45b" (UID: "4455b323-7b78-42f7-ac34-5b74cd7ca45b"). InnerVolumeSpecName "kube-api-access-56fc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.478011 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6xdbr"] Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.494164 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6xdbr"] Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.535745 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4455b323-7b78-42f7-ac34-5b74cd7ca45b" (UID: "4455b323-7b78-42f7-ac34-5b74cd7ca45b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.572901 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.572941 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4455b323-7b78-42f7-ac34-5b74cd7ca45b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:22 crc kubenswrapper[4835]: I0216 15:23:22.572957 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56fc4\" (UniqueName: \"kubernetes.io/projected/4455b323-7b78-42f7-ac34-5b74cd7ca45b-kube-api-access-56fc4\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.157316 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-txwlq" Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.157311 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-txwlq" event={"ID":"4455b323-7b78-42f7-ac34-5b74cd7ca45b","Type":"ContainerDied","Data":"99f1fedfacb9bc3213a15e2b3d1f1739b829bd9ff2e4a20f4615bbc142b90345"} Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.157869 4835 scope.go:117] "RemoveContainer" containerID="b96a686b5770e1a469026af0513baa866697b44754b8e6edfe5532f9e2155709" Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.176364 4835 scope.go:117] "RemoveContainer" containerID="7fe4e69e2b88328b4c82371ccb69c959868315a1472c165a4b6a516eaeddf045" Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.197756 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-txwlq"] Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.204412 4835 scope.go:117] "RemoveContainer" containerID="a6c0bf891ce44988620ab845cbf3a5103297dd1e5c005ccebdd0c98e7cc027c8" Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.209710 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-txwlq"] Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.386480 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" path="/var/lib/kubelet/pods/3ff8ca48-8641-46ab-9353-7a1d0c649acd/volumes" Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.387897 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" path="/var/lib/kubelet/pods/4455b323-7b78-42f7-ac34-5b74cd7ca45b/volumes" Feb 16 15:23:23 crc kubenswrapper[4835]: I0216 15:23:23.472733 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-54dd757795-lpkxx" Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.206583 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhdbv"] Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.206801 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nhdbv" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerName="registry-server" containerID="cri-o://7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7" gracePeriod=2 Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.664309 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.802355 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxhvn\" (UniqueName: \"kubernetes.io/projected/e7ab49da-31a8-43fa-8580-9c3a90687163-kube-api-access-qxhvn\") pod \"e7ab49da-31a8-43fa-8580-9c3a90687163\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.802429 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-catalog-content\") pod \"e7ab49da-31a8-43fa-8580-9c3a90687163\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.802483 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-utilities\") pod \"e7ab49da-31a8-43fa-8580-9c3a90687163\" (UID: \"e7ab49da-31a8-43fa-8580-9c3a90687163\") " Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.803450 4835 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-utilities" (OuterVolumeSpecName: "utilities") pod "e7ab49da-31a8-43fa-8580-9c3a90687163" (UID: "e7ab49da-31a8-43fa-8580-9c3a90687163"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.811494 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ab49da-31a8-43fa-8580-9c3a90687163-kube-api-access-qxhvn" (OuterVolumeSpecName: "kube-api-access-qxhvn") pod "e7ab49da-31a8-43fa-8580-9c3a90687163" (UID: "e7ab49da-31a8-43fa-8580-9c3a90687163"). InnerVolumeSpecName "kube-api-access-qxhvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.829191 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7ab49da-31a8-43fa-8580-9c3a90687163" (UID: "e7ab49da-31a8-43fa-8580-9c3a90687163"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.904822 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.905145 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7ab49da-31a8-43fa-8580-9c3a90687163-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:24 crc kubenswrapper[4835]: I0216 15:23:24.905160 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxhvn\" (UniqueName: \"kubernetes.io/projected/e7ab49da-31a8-43fa-8580-9c3a90687163-kube-api-access-qxhvn\") on node \"crc\" DevicePath \"\"" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.175462 4835 generic.go:334] "Generic (PLEG): container finished" podID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerID="7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7" exitCode=0 Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.175513 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhdbv" event={"ID":"e7ab49da-31a8-43fa-8580-9c3a90687163","Type":"ContainerDied","Data":"7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7"} Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.175547 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhdbv" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.175572 4835 scope.go:117] "RemoveContainer" containerID="7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.175560 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhdbv" event={"ID":"e7ab49da-31a8-43fa-8580-9c3a90687163","Type":"ContainerDied","Data":"fa4b81010f733b8969add2547a4903324c7fb8d170a23810de58f3d18f646600"} Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.192213 4835 scope.go:117] "RemoveContainer" containerID="32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.206911 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhdbv"] Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.212393 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhdbv"] Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.228971 4835 scope.go:117] "RemoveContainer" containerID="37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.256483 4835 scope.go:117] "RemoveContainer" containerID="7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7" Feb 16 15:23:25 crc kubenswrapper[4835]: E0216 15:23:25.256883 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7\": container with ID starting with 7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7 not found: ID does not exist" containerID="7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.256913 4835 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7"} err="failed to get container status \"7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7\": rpc error: code = NotFound desc = could not find container \"7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7\": container with ID starting with 7b5787ee2a60e991b819bb6fec2a4c22d666d5ca881c2e6a6f73877cd05252a7 not found: ID does not exist" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.256933 4835 scope.go:117] "RemoveContainer" containerID="32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d" Feb 16 15:23:25 crc kubenswrapper[4835]: E0216 15:23:25.257369 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d\": container with ID starting with 32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d not found: ID does not exist" containerID="32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.257412 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d"} err="failed to get container status \"32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d\": rpc error: code = NotFound desc = could not find container \"32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d\": container with ID starting with 32fa276e0ad82500e14094f8be1de61ac34ccc43c12d45bae4c854d10927532d not found: ID does not exist" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.257441 4835 scope.go:117] "RemoveContainer" containerID="37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89" Feb 16 15:23:25 crc kubenswrapper[4835]: E0216 
15:23:25.257927 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89\": container with ID starting with 37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89 not found: ID does not exist" containerID="37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.257954 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89"} err="failed to get container status \"37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89\": rpc error: code = NotFound desc = could not find container \"37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89\": container with ID starting with 37d3be22c27715ef50311d8e735dc6e05c9e5fbf1775b9b8ed6ba2d028439f89 not found: ID does not exist" Feb 16 15:23:25 crc kubenswrapper[4835]: I0216 15:23:25.386993 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" path="/var/lib/kubelet/pods/e7ab49da-31a8-43fa-8580-9c3a90687163/volumes" Feb 16 15:23:26 crc kubenswrapper[4835]: I0216 15:23:26.481604 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-w9l9w" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042087 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bxw5q"] Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042836 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerName="extract-content" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042848 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerName="extract-content" Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042859 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerName="extract-utilities" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042864 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerName="extract-utilities" Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042874 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerName="extract-utilities" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042880 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerName="extract-utilities" Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042890 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042896 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042906 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerName="extract-content" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042912 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerName="extract-content" Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042923 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerName="extract-content" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042929 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerName="extract-content" Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042939 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042945 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042956 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042962 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: E0216 15:23:43.042973 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerName="extract-utilities" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.042978 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerName="extract-utilities" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.043113 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7ab49da-31a8-43fa-8580-9c3a90687163" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.043134 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff8ca48-8641-46ab-9353-7a1d0c649acd" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.043145 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4455b323-7b78-42f7-ac34-5b74cd7ca45b" containerName="registry-server" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.043984 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.046994 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.047344 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zqkhj" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.047495 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.048633 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.065931 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bxw5q"] Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.101630 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ct24m"] Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.102945 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.104820 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.129540 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ct24m"] Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.152671 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsm58\" (UniqueName: \"kubernetes.io/projected/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-kube-api-access-gsm58\") pod \"dnsmasq-dns-675f4bcbfc-bxw5q\" (UID: \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.152726 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.153027 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-config\") pod \"dnsmasq-dns-675f4bcbfc-bxw5q\" (UID: \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.153154 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npmn5\" (UniqueName: \"kubernetes.io/projected/c94cc9c3-bdc5-4b04-8535-c4f006231943-kube-api-access-npmn5\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.153324 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-config\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.254139 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsm58\" (UniqueName: \"kubernetes.io/projected/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-kube-api-access-gsm58\") pod \"dnsmasq-dns-675f4bcbfc-bxw5q\" (UID: \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.254202 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.254231 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-config\") pod \"dnsmasq-dns-675f4bcbfc-bxw5q\" (UID: \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.254266 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npmn5\" (UniqueName: \"kubernetes.io/projected/c94cc9c3-bdc5-4b04-8535-c4f006231943-kube-api-access-npmn5\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 
crc kubenswrapper[4835]: I0216 15:23:43.254324 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-config\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.255149 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-config\") pod \"dnsmasq-dns-675f4bcbfc-bxw5q\" (UID: \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.255205 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.255251 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-config\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.272412 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npmn5\" (UniqueName: \"kubernetes.io/projected/c94cc9c3-bdc5-4b04-8535-c4f006231943-kube-api-access-npmn5\") pod \"dnsmasq-dns-78dd6ddcc-ct24m\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.272910 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gsm58\" (UniqueName: \"kubernetes.io/projected/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-kube-api-access-gsm58\") pod \"dnsmasq-dns-675f4bcbfc-bxw5q\" (UID: \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.362504 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.427235 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.848178 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bxw5q"] Feb 16 15:23:43 crc kubenswrapper[4835]: W0216 15:23:43.852562 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc7fe9d4_0930_4bcb_a675_6dc2f58c4168.slice/crio-f475c18d9efd754dc221a318c04e2304987e12dc1ee3cc88cd2e5cf293ff0d07 WatchSource:0}: Error finding container f475c18d9efd754dc221a318c04e2304987e12dc1ee3cc88cd2e5cf293ff0d07: Status 404 returned error can't find the container with id f475c18d9efd754dc221a318c04e2304987e12dc1ee3cc88cd2e5cf293ff0d07 Feb 16 15:23:43 crc kubenswrapper[4835]: W0216 15:23:43.902641 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc94cc9c3_bdc5_4b04_8535_c4f006231943.slice/crio-7e2d163da5f5d2d02d13deffff0bced3dbce7fb9141b433add0b87b22d0b950e WatchSource:0}: Error finding container 7e2d163da5f5d2d02d13deffff0bced3dbce7fb9141b433add0b87b22d0b950e: Status 404 returned error can't find the container with id 7e2d163da5f5d2d02d13deffff0bced3dbce7fb9141b433add0b87b22d0b950e Feb 16 15:23:43 crc kubenswrapper[4835]: I0216 15:23:43.903994 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-ct24m"] Feb 16 15:23:44 crc kubenswrapper[4835]: I0216 15:23:44.311464 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" event={"ID":"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168","Type":"ContainerStarted","Data":"f475c18d9efd754dc221a318c04e2304987e12dc1ee3cc88cd2e5cf293ff0d07"} Feb 16 15:23:44 crc kubenswrapper[4835]: I0216 15:23:44.312817 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" event={"ID":"c94cc9c3-bdc5-4b04-8535-c4f006231943","Type":"ContainerStarted","Data":"7e2d163da5f5d2d02d13deffff0bced3dbce7fb9141b433add0b87b22d0b950e"} Feb 16 15:23:45 crc kubenswrapper[4835]: I0216 15:23:45.810884 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bxw5q"] Feb 16 15:23:45 crc kubenswrapper[4835]: I0216 15:23:45.834754 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jn278"] Feb 16 15:23:45 crc kubenswrapper[4835]: I0216 15:23:45.836201 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:45 crc kubenswrapper[4835]: I0216 15:23:45.842019 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jn278"] Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.005063 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kqqx\" (UniqueName: \"kubernetes.io/projected/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-kube-api-access-4kqqx\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.005229 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.005252 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-config\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.094775 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ct24m"] Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.107171 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kqqx\" (UniqueName: \"kubernetes.io/projected/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-kube-api-access-4kqqx\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " 
pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.107279 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.107302 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-config\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.108151 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-config\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.109844 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.124102 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v7x8x"] Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.125579 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.129620 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kqqx\" (UniqueName: \"kubernetes.io/projected/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-kube-api-access-4kqqx\") pod \"dnsmasq-dns-666b6646f7-jn278\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.145299 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v7x8x"] Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.193381 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.309884 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplmh\" (UniqueName: \"kubernetes.io/projected/b48c6a8d-4722-4111-8c27-82ea2f235c24-kube-api-access-hplmh\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.309995 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.310796 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-config\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.413834 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-config\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.413889 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplmh\" (UniqueName: \"kubernetes.io/projected/b48c6a8d-4722-4111-8c27-82ea2f235c24-kube-api-access-hplmh\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.414035 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.415845 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-config\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.416825 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.435872 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplmh\" (UniqueName: \"kubernetes.io/projected/b48c6a8d-4722-4111-8c27-82ea2f235c24-kube-api-access-hplmh\") pod \"dnsmasq-dns-57d769cc4f-v7x8x\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.475398 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.705052 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jn278"] Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.995280 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.996944 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.999478 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.999673 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.999834 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 15:23:46 crc kubenswrapper[4835]: I0216 15:23:46.999954 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.000456 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9x64g" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.001094 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.006613 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.010202 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.023734 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v7x8x"] Feb 16 15:23:47 crc kubenswrapper[4835]: W0216 15:23:47.036323 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb48c6a8d_4722_4111_8c27_82ea2f235c24.slice/crio-a9def1ca71ba7f568edad5370b9e2669d17a410f2e710673c4e48bf406512d6c WatchSource:0}: Error finding container a9def1ca71ba7f568edad5370b9e2669d17a410f2e710673c4e48bf406512d6c: Status 404 returned error can't find the container with id a9def1ca71ba7f568edad5370b9e2669d17a410f2e710673c4e48bf406512d6c Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131419 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131480 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131545 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-917242d9-8ace-4461-8eda-99e70d26ca84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-917242d9-8ace-4461-8eda-99e70d26ca84\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131562 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131581 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m884j\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-kube-api-access-m884j\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131618 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131664 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131684 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-config-data\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131719 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.131737 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.232505 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.232826 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-917242d9-8ace-4461-8eda-99e70d26ca84\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-917242d9-8ace-4461-8eda-99e70d26ca84\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.232850 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m884j\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-kube-api-access-m884j\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.232883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.232921 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.232937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0" Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.232956 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-config-data\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") 
" pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.232985 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.233000 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.233022 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.233048 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.233081 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.233470 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.234308 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-server-conf\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.235011 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.235730 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-config-data\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.239735 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.239773 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-917242d9-8ace-4461-8eda-99e70d26ca84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-917242d9-8ace-4461-8eda-99e70d26ca84\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c316a1784014b19741a30eda42ee576705d7f2ea860d3ceb5d9216f83fcc6ee8/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.239996 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.240400 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.240500 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-pod-info\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.241410 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.253891 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m884j\" (UniqueName: \"kubernetes.io/projected/02aa07ee-7fa8-40e8-bd6a-2c98dc10edda-kube-api-access-m884j\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.280265 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.282638 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.295895 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.299553 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-917242d9-8ace-4461-8eda-99e70d26ca84\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-917242d9-8ace-4461-8eda-99e70d26ca84\") pod \"rabbitmq-server-0\" (UID: \"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda\") " pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.306097 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.307773 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.307923 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.308042 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.308519 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.308794 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hsdtj"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.308840 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.340093 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436052 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c673663-5be6-4ed4-b2b5-9a80e72391c6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436110 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436172 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436280 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c673663-5be6-4ed4-b2b5-9a80e72391c6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436313 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72207153-5f31-4dfb-ac01-34c35c132988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72207153-5f31-4dfb-ac01-34c35c132988\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436342 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436384 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436419 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl2qg\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-kube-api-access-dl2qg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436449 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.436479 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.440440 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jn278" event={"ID":"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa","Type":"ContainerStarted","Data":"aae7382e5b9528e267db8dbbb3c2d1077903a97286bd3a450cf41287ca16aac4"}
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.440479 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" event={"ID":"b48c6a8d-4722-4111-8c27-82ea2f235c24","Type":"ContainerStarted","Data":"a9def1ca71ba7f568edad5370b9e2669d17a410f2e710673c4e48bf406512d6c"}
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540260 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c673663-5be6-4ed4-b2b5-9a80e72391c6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540346 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540375 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540415 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c673663-5be6-4ed4-b2b5-9a80e72391c6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540433 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-72207153-5f31-4dfb-ac01-34c35c132988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72207153-5f31-4dfb-ac01-34c35c132988\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540482 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl2qg\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-kube-api-access-dl2qg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540559 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.540582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.541027 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.544006 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.544463 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.546007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.547045 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c673663-5be6-4ed4-b2b5-9a80e72391c6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.547431 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.548319 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c673663-5be6-4ed4-b2b5-9a80e72391c6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.550064 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c673663-5be6-4ed4-b2b5-9a80e72391c6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.551144 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.553070 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.553118 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-72207153-5f31-4dfb-ac01-34c35c132988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72207153-5f31-4dfb-ac01-34c35c132988\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/807659794012dca82557f4a4bbeee908a94ef7547b16f493fb65c9b3b69959d9/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.573398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl2qg\" (UniqueName: \"kubernetes.io/projected/9c673663-5be6-4ed4-b2b5-9a80e72391c6-kube-api-access-dl2qg\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.718304 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-72207153-5f31-4dfb-ac01-34c35c132988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-72207153-5f31-4dfb-ac01-34c35c132988\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c673663-5be6-4ed4-b2b5-9a80e72391c6\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:47 crc kubenswrapper[4835]: I0216 15:23:47.953768 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.003248 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.458256 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.459625 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.462338 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.462476 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8lvr6"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.462480 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.464599 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.470406 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.475342 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.570766 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r9ck\" (UniqueName: \"kubernetes.io/projected/32e52175-bb63-4076-a7af-4cf969b90ec6-kube-api-access-5r9ck\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.571016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.571790 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32e52175-bb63-4076-a7af-4cf969b90ec6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.571878 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-config-data-default\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.571959 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e52175-bb63-4076-a7af-4cf969b90ec6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.572028 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-kolla-config\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.572049 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2b81092e-3c03-461b-af6b-ebfd77aa1752\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b81092e-3c03-461b-af6b-ebfd77aa1752\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.572068 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e52175-bb63-4076-a7af-4cf969b90ec6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.673059 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.673183 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32e52175-bb63-4076-a7af-4cf969b90ec6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.673213 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-config-data-default\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.673230 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e52175-bb63-4076-a7af-4cf969b90ec6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.673271 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2b81092e-3c03-461b-af6b-ebfd77aa1752\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b81092e-3c03-461b-af6b-ebfd77aa1752\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.673290 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-kolla-config\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.673306 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e52175-bb63-4076-a7af-4cf969b90ec6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.673361 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r9ck\" (UniqueName: \"kubernetes.io/projected/32e52175-bb63-4076-a7af-4cf969b90ec6-kube-api-access-5r9ck\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.680695 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/32e52175-bb63-4076-a7af-4cf969b90ec6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.681333 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-config-data-default\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.684744 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.684967 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e52175-bb63-4076-a7af-4cf969b90ec6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.685551 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/32e52175-bb63-4076-a7af-4cf969b90ec6-kolla-config\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.687194 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.687214 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2b81092e-3c03-461b-af6b-ebfd77aa1752\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b81092e-3c03-461b-af6b-ebfd77aa1752\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9003d8879d921682cfd427f6ad26c545f580bb1c942ae4e26e83342aa54ccdc7/globalmount\"" pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.697816 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r9ck\" (UniqueName: \"kubernetes.io/projected/32e52175-bb63-4076-a7af-4cf969b90ec6-kube-api-access-5r9ck\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.724069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e52175-bb63-4076-a7af-4cf969b90ec6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.745679 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2b81092e-3c03-461b-af6b-ebfd77aa1752\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2b81092e-3c03-461b-af6b-ebfd77aa1752\") pod \"openstack-galera-0\" (UID: \"32e52175-bb63-4076-a7af-4cf969b90ec6\") " pod="openstack/openstack-galera-0"
Feb 16 15:23:48 crc kubenswrapper[4835]: I0216 15:23:48.784781 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.754316 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.755713 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.758224 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.758602 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.762443 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fr7x6"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.762471 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.784542 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.889654 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/752083aa-579e-46dc-addb-b923b394b393-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.890042 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.890146 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbhcs\" (UniqueName: \"kubernetes.io/projected/752083aa-579e-46dc-addb-b923b394b393-kube-api-access-gbhcs\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.890215 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752083aa-579e-46dc-addb-b923b394b393-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.890313 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/752083aa-579e-46dc-addb-b923b394b393-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.890340 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.890463 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-kolla-config\")
pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.890561 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c49d12be-f963-4664-8793-ce116024beed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49d12be-f963-4664-8793-ce116024beed\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.993263 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c49d12be-f963-4664-8793-ce116024beed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49d12be-f963-4664-8793-ce116024beed\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.994338 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/752083aa-579e-46dc-addb-b923b394b393-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.994390 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.994422 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbhcs\" (UniqueName: 
\"kubernetes.io/projected/752083aa-579e-46dc-addb-b923b394b393-kube-api-access-gbhcs\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.994450 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752083aa-579e-46dc-addb-b923b394b393-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.994484 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/752083aa-579e-46dc-addb-b923b394b393-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.994500 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.994587 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.995205 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.997293 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/752083aa-579e-46dc-addb-b923b394b393-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:49 crc kubenswrapper[4835]: I0216 15:23:49.998028 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:49.998548 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/752083aa-579e-46dc-addb-b923b394b393-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.002148 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/752083aa-579e-46dc-addb-b923b394b393-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.002376 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/752083aa-579e-46dc-addb-b923b394b393-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.005719 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.005942 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.006000 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c49d12be-f963-4664-8793-ce116024beed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49d12be-f963-4664-8793-ce116024beed\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/28d359b98ffe761b248d16ce6f72c7c794c82a5510e97ba6d20b25ca26e164f6/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.006680 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.010843 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.011091 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bqkr8" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.011173 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.014343 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbhcs\" (UniqueName: \"kubernetes.io/projected/752083aa-579e-46dc-addb-b923b394b393-kube-api-access-gbhcs\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.043662 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.056716 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c49d12be-f963-4664-8793-ce116024beed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49d12be-f963-4664-8793-ce116024beed\") pod \"openstack-cell1-galera-0\" (UID: \"752083aa-579e-46dc-addb-b923b394b393\") " pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.089205 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.097451 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8da5ef68-09c6-4938-99a8-b728f03b4d14-kolla-config\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.097499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r7k2\" (UniqueName: \"kubernetes.io/projected/8da5ef68-09c6-4938-99a8-b728f03b4d14-kube-api-access-9r7k2\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.097546 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da5ef68-09c6-4938-99a8-b728f03b4d14-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.097622 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8da5ef68-09c6-4938-99a8-b728f03b4d14-config-data\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.097717 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da5ef68-09c6-4938-99a8-b728f03b4d14-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 
15:23:50.202438 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da5ef68-09c6-4938-99a8-b728f03b4d14-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.202578 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8da5ef68-09c6-4938-99a8-b728f03b4d14-config-data\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.202632 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da5ef68-09c6-4938-99a8-b728f03b4d14-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.202659 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8da5ef68-09c6-4938-99a8-b728f03b4d14-kolla-config\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.202683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r7k2\" (UniqueName: \"kubernetes.io/projected/8da5ef68-09c6-4938-99a8-b728f03b4d14-kube-api-access-9r7k2\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.203809 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8da5ef68-09c6-4938-99a8-b728f03b4d14-kolla-config\") pod 
\"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.204012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8da5ef68-09c6-4938-99a8-b728f03b4d14-config-data\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.209015 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8da5ef68-09c6-4938-99a8-b728f03b4d14-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.217059 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8da5ef68-09c6-4938-99a8-b728f03b4d14-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.232369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r7k2\" (UniqueName: \"kubernetes.io/projected/8da5ef68-09c6-4938-99a8-b728f03b4d14-kube-api-access-9r7k2\") pod \"memcached-0\" (UID: \"8da5ef68-09c6-4938-99a8-b728f03b4d14\") " pod="openstack/memcached-0" Feb 16 15:23:50 crc kubenswrapper[4835]: I0216 15:23:50.389681 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 15:23:52 crc kubenswrapper[4835]: W0216 15:23:52.233185 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02aa07ee_7fa8_40e8_bd6a_2c98dc10edda.slice/crio-88e9caf50b0318b76f2fa4cc38e4b31405f973bae2f94b3ba203958750a58a6c WatchSource:0}: Error finding container 88e9caf50b0318b76f2fa4cc38e4b31405f973bae2f94b3ba203958750a58a6c: Status 404 returned error can't find the container with id 88e9caf50b0318b76f2fa4cc38e4b31405f973bae2f94b3ba203958750a58a6c Feb 16 15:23:52 crc kubenswrapper[4835]: I0216 15:23:52.368411 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:23:52 crc kubenswrapper[4835]: I0216 15:23:52.379048 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:23:52 crc kubenswrapper[4835]: I0216 15:23:52.378952 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:23:52 crc kubenswrapper[4835]: I0216 15:23:52.383183 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wbkc5" Feb 16 15:23:52 crc kubenswrapper[4835]: I0216 15:23:52.442312 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6tfd\" (UniqueName: \"kubernetes.io/projected/117011cd-1ad8-4aff-b5d4-49bce3381f02-kube-api-access-x6tfd\") pod \"kube-state-metrics-0\" (UID: \"117011cd-1ad8-4aff-b5d4-49bce3381f02\") " pod="openstack/kube-state-metrics-0" Feb 16 15:23:52 crc kubenswrapper[4835]: I0216 15:23:52.490288 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda","Type":"ContainerStarted","Data":"88e9caf50b0318b76f2fa4cc38e4b31405f973bae2f94b3ba203958750a58a6c"} Feb 16 15:23:52 crc 
kubenswrapper[4835]: I0216 15:23:52.544451 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6tfd\" (UniqueName: \"kubernetes.io/projected/117011cd-1ad8-4aff-b5d4-49bce3381f02-kube-api-access-x6tfd\") pod \"kube-state-metrics-0\" (UID: \"117011cd-1ad8-4aff-b5d4-49bce3381f02\") " pod="openstack/kube-state-metrics-0" Feb 16 15:23:52 crc kubenswrapper[4835]: I0216 15:23:52.577395 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6tfd\" (UniqueName: \"kubernetes.io/projected/117011cd-1ad8-4aff-b5d4-49bce3381f02-kube-api-access-x6tfd\") pod \"kube-state-metrics-0\" (UID: \"117011cd-1ad8-4aff-b5d4-49bce3381f02\") " pod="openstack/kube-state-metrics-0" Feb 16 15:23:52 crc kubenswrapper[4835]: I0216 15:23:52.708019 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.004740 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.006374 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.009124 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.009354 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.009472 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.009637 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.011148 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-fbwvl" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.031638 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.153971 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcqwk\" (UniqueName: \"kubernetes.io/projected/23555de7-4851-4730-b8b3-9d788622420a-kube-api-access-hcqwk\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.154019 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23555de7-4851-4730-b8b3-9d788622420a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 
15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.154078 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.154102 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/23555de7-4851-4730-b8b3-9d788622420a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.154134 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.154227 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23555de7-4851-4730-b8b3-9d788622420a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.154256 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.255853 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23555de7-4851-4730-b8b3-9d788622420a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.255916 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.255961 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23555de7-4851-4730-b8b3-9d788622420a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.255981 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcqwk\" (UniqueName: \"kubernetes.io/projected/23555de7-4851-4730-b8b3-9d788622420a-kube-api-access-hcqwk\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.256067 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.256089 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/23555de7-4851-4730-b8b3-9d788622420a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.256133 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.257740 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/23555de7-4851-4730-b8b3-9d788622420a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.262129 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.272515 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcqwk\" (UniqueName: \"kubernetes.io/projected/23555de7-4851-4730-b8b3-9d788622420a-kube-api-access-hcqwk\") pod \"alertmanager-metric-storage-0\" (UID: 
\"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.276260 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.277448 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23555de7-4851-4730-b8b3-9d788622420a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.277869 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23555de7-4851-4730-b8b3-9d788622420a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.281919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23555de7-4851-4730-b8b3-9d788622420a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"23555de7-4851-4730-b8b3-9d788622420a\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.329839 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.549141 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.550921 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.554771 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.554824 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.554881 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gbtvz" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.562034 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.562326 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.562436 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.562471 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.562498 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.577570 4835 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.662581 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.662643 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.662667 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2800ecb-4ec5-4930-a820-d9680894ad21-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.662701 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-config\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.662758 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.662786 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.662810 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mskpg\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-kube-api-access-mskpg\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.663135 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.663263 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.663343 
4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.764967 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.765243 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.765354 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2800ecb-4ec5-4930-a820-d9680894ad21-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.765445 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-config\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 
15:23:53.765571 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.765670 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.765753 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mskpg\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-kube-api-access-mskpg\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.765839 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.765961 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.766062 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.766466 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.766595 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.766925 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.768606 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2800ecb-4ec5-4930-a820-d9680894ad21-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.769503 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.769818 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-config\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.775156 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.775329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.775703 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.775730 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5d10da0f3db55069b2d5c8d1b5275ce7c3d76b215aa95646bfe310a7bc72f24b/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.784039 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mskpg\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-kube-api-access-mskpg\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.800373 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"prometheus-metric-storage-0\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:53 crc kubenswrapper[4835]: I0216 15:23:53.873980 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.862054 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-st4vx"] Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.863294 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-st4vx" Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.867587 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.870492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.875245 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dvkf8" Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.897078 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-st4vx"] Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.922514 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kxl4t"] Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.926820 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:55 crc kubenswrapper[4835]: I0216 15:23:55.966610 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kxl4t"] Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001106 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-run\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001154 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-lib\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001173 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-log-ovn\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001198 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efbff9d-b303-430c-b06c-36b79284a3f1-combined-ca-bundle\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001225 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-scripts\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001253 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4rwr\" (UniqueName: \"kubernetes.io/projected/2efbff9d-b303-430c-b06c-36b79284a3f1-kube-api-access-l4rwr\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001271 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efbff9d-b303-430c-b06c-36b79284a3f1-ovn-controller-tls-certs\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001287 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-etc-ovs\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001314 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2efbff9d-b303-430c-b06c-36b79284a3f1-scripts\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001373 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-run-ovn\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001427 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-log\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001448 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-run\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.001468 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc9kk\" (UniqueName: \"kubernetes.io/projected/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-kube-api-access-jc9kk\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.106951 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2efbff9d-b303-430c-b06c-36b79284a3f1-scripts\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107346 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-run-ovn\") pod 
\"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-log\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107447 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-run\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107472 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc9kk\" (UniqueName: \"kubernetes.io/projected/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-kube-api-access-jc9kk\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107498 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-run\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107515 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-lib\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc 
kubenswrapper[4835]: I0216 15:23:56.107557 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-log-ovn\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107578 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efbff9d-b303-430c-b06c-36b79284a3f1-combined-ca-bundle\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107603 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-scripts\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107632 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4rwr\" (UniqueName: \"kubernetes.io/projected/2efbff9d-b303-430c-b06c-36b79284a3f1-kube-api-access-l4rwr\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107654 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efbff9d-b303-430c-b06c-36b79284a3f1-ovn-controller-tls-certs\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.107671 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-etc-ovs\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.108117 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-run-ovn\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.108271 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-lib\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.108422 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-log\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.108475 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-run\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.108483 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-var-run\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " 
pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.108666 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2efbff9d-b303-430c-b06c-36b79284a3f1-var-log-ovn\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.109495 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2efbff9d-b303-430c-b06c-36b79284a3f1-scripts\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.109785 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-etc-ovs\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.110853 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-scripts\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.114067 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efbff9d-b303-430c-b06c-36b79284a3f1-ovn-controller-tls-certs\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.126172 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l4rwr\" (UniqueName: \"kubernetes.io/projected/2efbff9d-b303-430c-b06c-36b79284a3f1-kube-api-access-l4rwr\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.134042 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc9kk\" (UniqueName: \"kubernetes.io/projected/a510fbea-dfa0-48e9-9557-a9e7f75cae9a-kube-api-access-jc9kk\") pod \"ovn-controller-ovs-kxl4t\" (UID: \"a510fbea-dfa0-48e9-9557-a9e7f75cae9a\") " pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.135463 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efbff9d-b303-430c-b06c-36b79284a3f1-combined-ca-bundle\") pod \"ovn-controller-st4vx\" (UID: \"2efbff9d-b303-430c-b06c-36b79284a3f1\") " pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.212425 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-st4vx" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.251487 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.315444 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.316734 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.319609 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.319778 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.319962 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-28l9t" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.320136 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.320828 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.327007 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.412941 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.413003 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.413029 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.413139 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-43a79ad3-f8d8-40ac-99df-22bc4fb3a323\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-43a79ad3-f8d8-40ac-99df-22bc4fb3a323\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.413201 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-config\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.413276 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6ph\" (UniqueName: \"kubernetes.io/projected/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-kube-api-access-kv6ph\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.413345 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.413394 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.515820 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-43a79ad3-f8d8-40ac-99df-22bc4fb3a323\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-43a79ad3-f8d8-40ac-99df-22bc4fb3a323\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.515898 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-config\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.515961 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv6ph\" (UniqueName: \"kubernetes.io/projected/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-kube-api-access-kv6ph\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.516010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.516067 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.516687 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.516189 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.517030 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.517055 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.517883 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc 
kubenswrapper[4835]: I0216 15:23:56.518033 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-config\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.525616 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.529297 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.529346 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-43a79ad3-f8d8-40ac-99df-22bc4fb3a323\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-43a79ad3-f8d8-40ac-99df-22bc4fb3a323\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8b0cc5320e87dfca7031587118484bdc717db2220b423dbea7b428ee28c372ff/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.529397 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.530171 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.533464 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv6ph\" (UniqueName: \"kubernetes.io/projected/c42ab514-0d06-4182-9ef7-6bcd9fb2afd8-kube-api-access-kv6ph\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.561076 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-43a79ad3-f8d8-40ac-99df-22bc4fb3a323\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-43a79ad3-f8d8-40ac-99df-22bc4fb3a323\") pod \"ovsdbserver-nb-0\" (UID: \"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:23:56 crc kubenswrapper[4835]: I0216 15:23:56.670474 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.801065 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.806113 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.809518 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.809758 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.809978 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.810149 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qqnlw" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.817027 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.899847 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2715c396-7f97-4169-9a8c-4cc46f834974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2715c396-7f97-4169-9a8c-4cc46f834974\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.900197 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.900221 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.900253 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbkn\" (UniqueName: \"kubernetes.io/projected/1949747b-769a-41b5-96cc-5d51092d1615-kube-api-access-lvbkn\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.900300 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.900317 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1949747b-769a-41b5-96cc-5d51092d1615-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.900336 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1949747b-769a-41b5-96cc-5d51092d1615-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:00 crc kubenswrapper[4835]: I0216 15:24:00.900359 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1949747b-769a-41b5-96cc-5d51092d1615-config\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.001811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2715c396-7f97-4169-9a8c-4cc46f834974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2715c396-7f97-4169-9a8c-4cc46f834974\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.001882 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.001903 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.001928 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbkn\" (UniqueName: \"kubernetes.io/projected/1949747b-769a-41b5-96cc-5d51092d1615-kube-api-access-lvbkn\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.001994 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-metrics-certs-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.002015 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1949747b-769a-41b5-96cc-5d51092d1615-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.002031 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1949747b-769a-41b5-96cc-5d51092d1615-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.002055 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1949747b-769a-41b5-96cc-5d51092d1615-config\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.002705 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1949747b-769a-41b5-96cc-5d51092d1615-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.003051 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1949747b-769a-41b5-96cc-5d51092d1615-config\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.003547 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/1949747b-769a-41b5-96cc-5d51092d1615-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.009998 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.011031 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.011069 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2715c396-7f97-4169-9a8c-4cc46f834974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2715c396-7f97-4169-9a8c-4cc46f834974\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/56e032aa3600557f428cf6f0699c65b9045d6eb010abe68fa115943cbc61bb8b/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.023240 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.027404 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1949747b-769a-41b5-96cc-5d51092d1615-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.032255 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbkn\" (UniqueName: \"kubernetes.io/projected/1949747b-769a-41b5-96cc-5d51092d1615-kube-api-access-lvbkn\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.199200 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2715c396-7f97-4169-9a8c-4cc46f834974\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2715c396-7f97-4169-9a8c-4cc46f834974\") pod \"ovsdbserver-sb-0\" (UID: \"1949747b-769a-41b5-96cc-5d51092d1615\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.439347 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.748345 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb"] Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.754162 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.760596 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb"] Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.762311 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.762498 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.762498 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.762637 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.764008 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-hj2ms" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.814410 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.814461 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.814492 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0df7f89-f92f-4f95-8150-5f864d8d4134-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.814664 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w6nr\" (UniqueName: \"kubernetes.io/projected/f0df7f89-f92f-4f95-8150-5f864d8d4134-kube-api-access-9w6nr\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.814842 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.912055 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt"] Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.913859 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.918491 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.918575 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.918608 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.918636 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0df7f89-f92f-4f95-8150-5f864d8d4134-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.918669 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9w6nr\" (UniqueName: \"kubernetes.io/projected/f0df7f89-f92f-4f95-8150-5f864d8d4134-kube-api-access-9w6nr\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.919630 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.920467 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.921884 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.921965 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.922690 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.922971 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f0df7f89-f92f-4f95-8150-5f864d8d4134-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.924355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/f0df7f89-f92f-4f95-8150-5f864d8d4134-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.946747 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt"] Feb 16 15:24:01 crc kubenswrapper[4835]: I0216 15:24:01.957649 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w6nr\" (UniqueName: \"kubernetes.io/projected/f0df7f89-f92f-4f95-8150-5f864d8d4134-kube-api-access-9w6nr\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-95gmb\" (UID: \"f0df7f89-f92f-4f95-8150-5f864d8d4134\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.011017 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.012071 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.018350 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.018546 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.019834 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.019890 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.019931 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv5lv\" (UniqueName: \"kubernetes.io/projected/15cb4b80-ac3e-407f-ac7d-b18c4f936241-kube-api-access-cv5lv\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.019966 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.019988 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.020013 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cb4b80-ac3e-407f-ac7d-b18c4f936241-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.073573 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.097003 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.130826 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.130896 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv5lv\" (UniqueName: \"kubernetes.io/projected/15cb4b80-ac3e-407f-ac7d-b18c4f936241-kube-api-access-cv5lv\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.130933 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.130966 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: 
\"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.130992 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.131013 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cb4b80-ac3e-407f-ac7d-b18c4f936241-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.131032 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.131057 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8tbg\" (UniqueName: \"kubernetes.io/projected/3638d231-c31c-4620-b3e1-d45083acee56-kube-api-access-d8tbg\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 
15:24:02.131092 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.131124 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.131165 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3638d231-c31c-4620-b3e1-d45083acee56-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.132042 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15cb4b80-ac3e-407f-ac7d-b18c4f936241-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.132147 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.135088 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.150272 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.151045 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.152419 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.160740 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.160990 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv5lv\" (UniqueName: \"kubernetes.io/projected/15cb4b80-ac3e-407f-ac7d-b18c4f936241-kube-api-access-cv5lv\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.161122 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.161329 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.161604 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.162142 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.163197 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.167492 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/15cb4b80-ac3e-407f-ac7d-b18c4f936241-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-ts8jt\" (UID: \"15cb4b80-ac3e-407f-ac7d-b18c4f936241\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.167634 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.176562 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.178016 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.181828 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-l9cmc" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.191597 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.232662 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.232818 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.232912 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8tbg\" (UniqueName: \"kubernetes.io/projected/3638d231-c31c-4620-b3e1-d45083acee56-kube-api-access-d8tbg\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.232997 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233076 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233156 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233244 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233322 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233404 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233484 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233592 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzhrg\" (UniqueName: \"kubernetes.io/projected/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-kube-api-access-lzhrg\") pod 
\"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233695 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233788 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233863 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.233932 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" 
Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234075 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3638d231-c31c-4620-b3e1-d45083acee56-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234153 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234225 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234297 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234383 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-rbac\") pod 
\"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234466 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234558 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdcc\" (UniqueName: \"kubernetes.io/projected/0724b33e-42df-4030-98fe-cf498befbf2e-kube-api-access-vvdcc\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234654 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.234744 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " 
pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.235022 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3638d231-c31c-4620-b3e1-d45083acee56-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.238792 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.248312 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8tbg\" (UniqueName: \"kubernetes.io/projected/3638d231-c31c-4620-b3e1-d45083acee56-kube-api-access-d8tbg\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.252767 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/3638d231-c31c-4620-b3e1-d45083acee56-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb\" (UID: \"3638d231-c31c-4620-b3e1-d45083acee56\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.305664 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336501 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336588 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336607 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336625 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336648 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" 
(UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336662 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336691 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336713 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336734 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdcc\" (UniqueName: \"kubernetes.io/projected/0724b33e-42df-4030-98fe-cf498befbf2e-kube-api-access-vvdcc\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336760 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336800 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336819 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336838 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336856 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336875 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: E0216 15:24:02.336850 4835 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336890 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336911 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.336932 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzhrg\" (UniqueName: 
\"kubernetes.io/projected/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-kube-api-access-lzhrg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: E0216 15:24:02.336978 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tls-secret podName:0724b33e-42df-4030-98fe-cf498befbf2e nodeName:}" failed. No retries permitted until 2026-02-16 15:24:02.836930354 +0000 UTC m=+992.128923319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" (UID: "0724b33e-42df-4030-98fe-cf498befbf2e") : secret "cloudkitty-lokistack-gateway-http" not found Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.337785 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.337960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: E0216 15:24:02.338021 4835 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret 
"cloudkitty-lokistack-gateway-http" not found Feb 16 15:24:02 crc kubenswrapper[4835]: E0216 15:24:02.338058 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tls-secret podName:cbd2b381-c620-4ff8-9942-e9f5b1c484d4 nodeName:}" failed. No retries permitted until 2026-02-16 15:24:02.838045793 +0000 UTC m=+992.130038688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" (UID: "cbd2b381-c620-4ff8-9942-e9f5b1c484d4") : secret "cloudkitty-lokistack-gateway-http" not found Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.338549 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.338632 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.338690 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: 
I0216 15:24:02.339313 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.339561 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.340070 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.340328 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.352746 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.352865 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.352873 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.352933 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.353743 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.355589 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lzhrg\" (UniqueName: \"kubernetes.io/projected/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-kube-api-access-lzhrg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.356966 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdcc\" (UniqueName: \"kubernetes.io/projected/0724b33e-42df-4030-98fe-cf498befbf2e-kube-api-access-vvdcc\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.357408 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.845593 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.845710 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.849665 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0724b33e-42df-4030-98fe-cf498befbf2e-tls-secret\") pod 
\"cloudkitty-lokistack-gateway-7f8685b49f-rqzc2\" (UID: \"0724b33e-42df-4030-98fe-cf498befbf2e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.849865 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cbd2b381-c620-4ff8-9942-e9f5b1c484d4-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-g5p5k\" (UID: \"cbd2b381-c620-4ff8-9942-e9f5b1c484d4\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.904889 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.906728 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.909592 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.909763 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.911737 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.911870 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.947082 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e246a943-0c6d-4738-8a73-d3e576819680-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.947198 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.947247 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.947266 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.947294 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.947319 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.947337 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.947361 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dsz8\" (UniqueName: \"kubernetes.io/projected/e246a943-0c6d-4738-8a73-d3e576819680-kube-api-access-5dsz8\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.985140 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.986363 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.988440 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.988997 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 16 15:24:02 crc kubenswrapper[4835]: I0216 15:24:02.994294 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.048783 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.048904 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.048929 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.048956 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.048985 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134112d9-c103-4429-b224-13589ad6d931-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.049007 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.049029 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.049089 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 
15:24:03.049137 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dsz8\" (UniqueName: \"kubernetes.io/projected/e246a943-0c6d-4738-8a73-d3e576819680-kube-api-access-5dsz8\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.049230 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e246a943-0c6d-4738-8a73-d3e576819680-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.049282 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.049334 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.049824 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 
15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.050080 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e246a943-0c6d-4738-8a73-d3e576819680-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.050755 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.050978 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.051067 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-599x8\" (UniqueName: \"kubernetes.io/projected/134112d9-c103-4429-b224-13589ad6d931-kube-api-access-599x8\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.051097 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " 
pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.051230 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.055180 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.060158 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.060968 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e246a943-0c6d-4738-8a73-d3e576819680-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.067232 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dsz8\" (UniqueName: \"kubernetes.io/projected/e246a943-0c6d-4738-8a73-d3e576819680-kube-api-access-5dsz8\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.081497 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.082791 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.083432 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.085586 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.085817 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.105047 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.110321 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"e246a943-0c6d-4738-8a73-d3e576819680\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.143934 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.152827 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134112d9-c103-4429-b224-13589ad6d931-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.152873 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.152918 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.152950 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.152978 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-599x8\" (UniqueName: 
\"kubernetes.io/projected/134112d9-c103-4429-b224-13589ad6d931-kube-api-access-599x8\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.152998 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.153044 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.154322 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.154711 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134112d9-c103-4429-b224-13589ad6d931-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.155789 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.161066 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.162136 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.171149 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/134112d9-c103-4429-b224-13589ad6d931-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.171663 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-599x8\" (UniqueName: \"kubernetes.io/projected/134112d9-c103-4429-b224-13589ad6d931-kube-api-access-599x8\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.182804 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"134112d9-c103-4429-b224-13589ad6d931\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.240514 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.253989 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdzwl\" (UniqueName: \"kubernetes.io/projected/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-kube-api-access-gdzwl\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.254064 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.254131 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.254207 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.254472 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.254583 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.254808 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.311063 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.356503 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.356572 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.356633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.357586 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.357858 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.357896 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.357916 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.357944 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdzwl\" (UniqueName: \"kubernetes.io/projected/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-kube-api-access-gdzwl\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.361684 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.362499 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.363277 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.373547 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.384288 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.392260 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.401135 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdzwl\" (UniqueName: \"kubernetes.io/projected/ef4ec5b3-b0ad-4a36-a280-67da2ffb786e-kube-api-access-gdzwl\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:03 crc kubenswrapper[4835]: I0216 15:24:03.405082 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:05 crc kubenswrapper[4835]: I0216 15:24:05.438617 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 15:24:07 crc kubenswrapper[4835]: E0216 15:24:07.978764 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:24:07 crc kubenswrapper[4835]: E0216 15:24:07.979250 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hplmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-v7x8x_openstack(b48c6a8d-4722-4111-8c27-82ea2f235c24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:24:07 crc kubenswrapper[4835]: E0216 15:24:07.980427 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.055141 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.055331 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npmn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ct24m_openstack(c94cc9c3-bdc5-4b04-8535-c4f006231943): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.057038 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" podUID="c94cc9c3-bdc5-4b04-8535-c4f006231943" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.080732 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.081024 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsm58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bxw5q_openstack(fc7fe9d4-0930-4bcb-a675-6dc2f58c4168): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.082193 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" podUID="fc7fe9d4-0930-4bcb-a675-6dc2f58c4168" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.094776 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.094977 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kqqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-jn278_openstack(0620e270-4a0a-41b9-8e14-a9f29ce6b9aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.096272 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-jn278" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" Feb 16 15:24:08 crc kubenswrapper[4835]: I0216 15:24:08.482811 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 15:24:08 crc kubenswrapper[4835]: I0216 15:24:08.614162 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32e52175-bb63-4076-a7af-4cf969b90ec6","Type":"ContainerStarted","Data":"4b5867b068829bd9212a911bfc54d55f2ef0f3deb7635632a6099ddc17c28688"} Feb 16 15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.618360 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-jn278" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" Feb 16 
15:24:08 crc kubenswrapper[4835]: E0216 15:24:08.618576 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" Feb 16 15:24:08 crc kubenswrapper[4835]: I0216 15:24:08.912551 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb"] Feb 16 15:24:08 crc kubenswrapper[4835]: I0216 15:24:08.923388 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 15:24:08 crc kubenswrapper[4835]: W0216 15:24:08.944197 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3638d231_c31c_4620_b3e1_d45083acee56.slice/crio-901f553122363eec06f67c78bd156f7391704513e111bc5f5dfc92c490cddb9b WatchSource:0}: Error finding container 901f553122363eec06f67c78bd156f7391704513e111bc5f5dfc92c490cddb9b: Status 404 returned error can't find the container with id 901f553122363eec06f67c78bd156f7391704513e111bc5f5dfc92c490cddb9b Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.398140 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.403549 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.466292 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-config\") pod \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\" (UID: \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\") " Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.466366 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-config\") pod \"c94cc9c3-bdc5-4b04-8535-c4f006231943\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.466620 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsm58\" (UniqueName: \"kubernetes.io/projected/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-kube-api-access-gsm58\") pod \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\" (UID: \"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168\") " Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.466749 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-dns-svc\") pod \"c94cc9c3-bdc5-4b04-8535-c4f006231943\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.466818 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npmn5\" (UniqueName: \"kubernetes.io/projected/c94cc9c3-bdc5-4b04-8535-c4f006231943-kube-api-access-npmn5\") pod \"c94cc9c3-bdc5-4b04-8535-c4f006231943\" (UID: \"c94cc9c3-bdc5-4b04-8535-c4f006231943\") " Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.467233 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-config" (OuterVolumeSpecName: "config") pod "fc7fe9d4-0930-4bcb-a675-6dc2f58c4168" (UID: "fc7fe9d4-0930-4bcb-a675-6dc2f58c4168"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.467240 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-config" (OuterVolumeSpecName: "config") pod "c94cc9c3-bdc5-4b04-8535-c4f006231943" (UID: "c94cc9c3-bdc5-4b04-8535-c4f006231943"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.467302 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c94cc9c3-bdc5-4b04-8535-c4f006231943" (UID: "c94cc9c3-bdc5-4b04-8535-c4f006231943"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.472468 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-kube-api-access-gsm58" (OuterVolumeSpecName: "kube-api-access-gsm58") pod "fc7fe9d4-0930-4bcb-a675-6dc2f58c4168" (UID: "fc7fe9d4-0930-4bcb-a675-6dc2f58c4168"). InnerVolumeSpecName "kube-api-access-gsm58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.487327 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94cc9c3-bdc5-4b04-8535-c4f006231943-kube-api-access-npmn5" (OuterVolumeSpecName: "kube-api-access-npmn5") pod "c94cc9c3-bdc5-4b04-8535-c4f006231943" (UID: "c94cc9c3-bdc5-4b04-8535-c4f006231943"). InnerVolumeSpecName "kube-api-access-npmn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.551814 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.582172 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.588922 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.588999 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npmn5\" (UniqueName: \"kubernetes.io/projected/c94cc9c3-bdc5-4b04-8535-c4f006231943-kube-api-access-npmn5\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.589015 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.589026 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c94cc9c3-bdc5-4b04-8535-c4f006231943-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.589043 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsm58\" (UniqueName: \"kubernetes.io/projected/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168-kube-api-access-gsm58\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:09 crc kubenswrapper[4835]: W0216 15:24:09.615836 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c673663_5be6_4ed4_b2b5_9a80e72391c6.slice/crio-d7500b8680180ee428a4378662cf2018e9c2c24791608fe5f509677b2fdb3ba8 WatchSource:0}: Error finding container d7500b8680180ee428a4378662cf2018e9c2c24791608fe5f509677b2fdb3ba8: Status 404 returned error can't find the container with id d7500b8680180ee428a4378662cf2018e9c2c24791608fe5f509677b2fdb3ba8 Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.634102 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" event={"ID":"fc7fe9d4-0930-4bcb-a675-6dc2f58c4168","Type":"ContainerDied","Data":"f475c18d9efd754dc221a318c04e2304987e12dc1ee3cc88cd2e5cf293ff0d07"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.634189 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bxw5q" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.642016 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.642050 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"752083aa-579e-46dc-addb-b923b394b393","Type":"ContainerStarted","Data":"bf459cdf0d5b3405c361c5c15caf1b261d807fee25a4d4d2cb02020803e3b5ac"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.644702 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda","Type":"ContainerStarted","Data":"be6c3c1bc70c253782f3f2c577f9305b7696dcc38a63022b24140a906582a59a"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.653374 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8da5ef68-09c6-4938-99a8-b728f03b4d14","Type":"ContainerStarted","Data":"3c0aeb9abcd108ec253d7b14fa3fb75ff7553a28badec6b62c7c50190e56804a"} Feb 16 15:24:09 
crc kubenswrapper[4835]: I0216 15:24:09.660590 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" event={"ID":"c94cc9c3-bdc5-4b04-8535-c4f006231943","Type":"ContainerDied","Data":"7e2d163da5f5d2d02d13deffff0bced3dbce7fb9141b433add0b87b22d0b950e"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.660666 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ct24m" Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.663838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.667384 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"23555de7-4851-4730-b8b3-9d788622420a","Type":"ContainerStarted","Data":"b5a2df571bac19f8f26644b37e189c422d134be643cf26627464dabe9162a02d"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.675740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" event={"ID":"cbd2b381-c620-4ff8-9942-e9f5b1c484d4","Type":"ContainerStarted","Data":"95487735ca565d5a3f3b4807a501697e66d9233ea6b5dd2c78b85219499ef4f6"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.678024 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-st4vx"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.687727 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-st4vx" event={"ID":"2efbff9d-b303-430c-b06c-36b79284a3f1","Type":"ContainerStarted","Data":"eefa95baa9dccbb0ae53b763a39bf735cb33cf811c90b286c888cbef7d9fdd9b"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.692301 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 15:24:09 crc kubenswrapper[4835]: W0216 15:24:09.698970 4835 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc42ab514_0d06_4182_9ef7_6bcd9fb2afd8.slice/crio-17d76832115e1a72d2b2ae788890a092f145a1fcabdf46c74277339400870e9a WatchSource:0}: Error finding container 17d76832115e1a72d2b2ae788890a092f145a1fcabdf46c74277339400870e9a: Status 404 returned error can't find the container with id 17d76832115e1a72d2b2ae788890a092f145a1fcabdf46c74277339400870e9a Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.699139 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"e246a943-0c6d-4738-8a73-d3e576819680","Type":"ContainerStarted","Data":"e16eb7f39451a7e97093b249cfc2c44717945fb91bc1acc5e6c724e4eaefa8a1"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.728324 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"117011cd-1ad8-4aff-b5d4-49bce3381f02","Type":"ContainerStarted","Data":"491b805512d2083b15d40277355653fe73b4602a770df00607d66ed1daff2a50"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.730889 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" event={"ID":"3638d231-c31c-4620-b3e1-d45083acee56","Type":"ContainerStarted","Data":"901f553122363eec06f67c78bd156f7391704513e111bc5f5dfc92c490cddb9b"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.735356 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.748966 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerStarted","Data":"4facf151be714740510472dc244ccd713668c8193975d0f88b7dac871634b4f1"} Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.760304 4835 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.798595 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bxw5q"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.809562 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bxw5q"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.831620 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ct24m"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.845088 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ct24m"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.858904 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kxl4t"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.945542 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.957974 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.965345 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 15:24:09 crc kubenswrapper[4835]: I0216 15:24:09.971026 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2"] Feb 16 15:24:09 crc kubenswrapper[4835]: W0216 15:24:09.979902 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef4ec5b3_b0ad_4a36_a280_67da2ffb786e.slice/crio-3f49fc9f5c5ec52cafdce67db129cf627312d783c65a31db947e6325b987cc3b WatchSource:0}: Error finding container 
3f49fc9f5c5ec52cafdce67db129cf627312d783c65a31db947e6325b987cc3b: Status 404 returned error can't find the container with id 3f49fc9f5c5ec52cafdce67db129cf627312d783c65a31db947e6325b987cc3b Feb 16 15:24:09 crc kubenswrapper[4835]: W0216 15:24:09.995107 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0724b33e_42df_4030_98fe_cf498befbf2e.slice/crio-d108f8f42ed1f4d49cbb15abb40c677820b501d3c05bd034d410eb9619131af9 WatchSource:0}: Error finding container d108f8f42ed1f4d49cbb15abb40c677820b501d3c05bd034d410eb9619131af9: Status 404 returned error can't find the container with id d108f8f42ed1f4d49cbb15abb40c677820b501d3c05bd034d410eb9619131af9 Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.033744 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 15:24:10 crc kubenswrapper[4835]: W0216 15:24:10.045339 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod134112d9_c103_4429_b224_13589ad6d931.slice/crio-baa4d50bb274f94ff3249fbb3f1253eabd8da817d338e58f0aa0e57aaf908eff WatchSource:0}: Error finding container baa4d50bb274f94ff3249fbb3f1253eabd8da817d338e58f0aa0e57aaf908eff: Status 404 returned error can't find the container with id baa4d50bb274f94ff3249fbb3f1253eabd8da817d338e58f0aa0e57aaf908eff Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.058910 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:24:10 crc kubenswrapper[4835]: E0216 15:24:10.064204 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-compactor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=compactor -config.file=/etc/loki/config/config.yaml 
-runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-599x8,ReadOnly:true,MountPath:/var/run/secret
s/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-compactor-0_openstack(134112d9-c103-4429-b224-13589ad6d931): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:24:10 crc kubenswrapper[4835]: E0216 15:24:10.065359 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="134112d9-c103-4429-b224-13589ad6d931" Feb 16 15:24:10 crc kubenswrapper[4835]: W0216 15:24:10.072011 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1949747b_769a_41b5_96cc_5d51092d1615.slice/crio-5c4c742ab4fbd32b05fac9d53c044e1f147bed826678c803f0395f2ddc1d5c7d WatchSource:0}: Error finding container 5c4c742ab4fbd32b05fac9d53c044e1f147bed826678c803f0395f2ddc1d5c7d: Status 404 returned error can't find the container with id 5c4c742ab4fbd32b05fac9d53c044e1f147bed826678c803f0395f2ddc1d5c7d Feb 16 15:24:10 crc kubenswrapper[4835]: E0216 15:24:10.075066 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n688h67dh5b4h67bh58ch578hf4hd4h5bbh5fdh5dhb9h54ch5b7hfh66dh566h5b9hc8h574h5bhfch666h559hcdh5b8h5dfh567h97hd7h687h9q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-cer
ts,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvbkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(1949747b-769a-41b5-96cc-5d51092d1615): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:24:10 crc kubenswrapper[4835]: E0216 15:24:10.077478 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n688h67dh5b4h67bh58ch578hf4hd4h5bbh5fdh5dhb9h54ch5b7hfh66dh566h5b9hc8h574h5bhfch666h559hcdh5b8h5dfh567h97hd7h687h9q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvbkn,ReadOnly:true,MountPath:/var/run/secrets/kuber
netes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(1949747b-769a-41b5-96cc-5d51092d1615): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:24:10 crc kubenswrapper[4835]: E0216 15:24:10.079697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-sb-0" podUID="1949747b-769a-41b5-96cc-5d51092d1615" Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.757889 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kxl4t" event={"ID":"a510fbea-dfa0-48e9-9557-a9e7f75cae9a","Type":"ContainerStarted","Data":"6f2586d473fc22d83a5e4b524fc11e64e8f7761afb711430e79f019bdce94136"} Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.761645 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" event={"ID":"15cb4b80-ac3e-407f-ac7d-b18c4f936241","Type":"ContainerStarted","Data":"90ad49b02846c62e662945ac8400e87eff726bc36db634273a6dfad165231929"} Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.763629 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c673663-5be6-4ed4-b2b5-9a80e72391c6","Type":"ContainerStarted","Data":"801b9789de8f5c11c3939a5264c1048b9969650b0f2637dd6e44532e0063ad8e"} Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.763672 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c673663-5be6-4ed4-b2b5-9a80e72391c6","Type":"ContainerStarted","Data":"d7500b8680180ee428a4378662cf2018e9c2c24791608fe5f509677b2fdb3ba8"} Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.766012 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e","Type":"ContainerStarted","Data":"3f49fc9f5c5ec52cafdce67db129cf627312d783c65a31db947e6325b987cc3b"} Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.771633 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8","Type":"ContainerStarted","Data":"17d76832115e1a72d2b2ae788890a092f145a1fcabdf46c74277339400870e9a"} Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.773015 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"134112d9-c103-4429-b224-13589ad6d931","Type":"ContainerStarted","Data":"baa4d50bb274f94ff3249fbb3f1253eabd8da817d338e58f0aa0e57aaf908eff"} Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.774297 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1949747b-769a-41b5-96cc-5d51092d1615","Type":"ContainerStarted","Data":"5c4c742ab4fbd32b05fac9d53c044e1f147bed826678c803f0395f2ddc1d5c7d"} Feb 16 15:24:10 crc kubenswrapper[4835]: E0216 15:24:10.774865 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="134112d9-c103-4429-b224-13589ad6d931" Feb 16 15:24:10 crc kubenswrapper[4835]: E0216 15:24:10.776139 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="1949747b-769a-41b5-96cc-5d51092d1615" Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.776284 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" event={"ID":"0724b33e-42df-4030-98fe-cf498befbf2e","Type":"ContainerStarted","Data":"d108f8f42ed1f4d49cbb15abb40c677820b501d3c05bd034d410eb9619131af9"} Feb 16 15:24:10 crc kubenswrapper[4835]: I0216 15:24:10.778897 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" event={"ID":"f0df7f89-f92f-4f95-8150-5f864d8d4134","Type":"ContainerStarted","Data":"c8b536fd006502246f0f37114a7edd70cb731e4c87716b54704ceff629258335"} Feb 16 15:24:11 crc kubenswrapper[4835]: I0216 15:24:11.411472 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c94cc9c3-bdc5-4b04-8535-c4f006231943" path="/var/lib/kubelet/pods/c94cc9c3-bdc5-4b04-8535-c4f006231943/volumes" Feb 16 15:24:11 crc kubenswrapper[4835]: I0216 15:24:11.411881 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7fe9d4-0930-4bcb-a675-6dc2f58c4168" 
path="/var/lib/kubelet/pods/fc7fe9d4-0930-4bcb-a675-6dc2f58c4168/volumes" Feb 16 15:24:11 crc kubenswrapper[4835]: E0216 15:24:11.789165 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="1949747b-769a-41b5-96cc-5d51092d1615" Feb 16 15:24:11 crc kubenswrapper[4835]: E0216 15:24:11.794317 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="134112d9-c103-4429-b224-13589ad6d931" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.031055 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.032934 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca
,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dsz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-ingester-0_openstack(e246a943-0c6d-4738-8a73-d3e576819680): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.036036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e246a943-0c6d-4738-8a73-d3e576819680" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.059380 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.059603 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d5h5b4h549hb6h5fh96h5bdhbbh56bh5c9h699h559h56dh696hdbh5chbfhb8h5d4hc7h547h87h589h5b5h66h7dh5dbh65dh84h5b8h58bhd8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc9kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMess
agePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-kxl4t_openstack(a510fbea-dfa0-48e9-9557-a9e7f75cae9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.060767 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-kxl4t" podUID="a510fbea-dfa0-48e9-9557-a9e7f75cae9a" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.159775 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.159987 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-query-frontend,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=query-frontend -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-query-frontend-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8tbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb_openstack(3638d231-c31c-4620-b3e1-d45083acee56): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.162100 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" podUID="3638d231-c31c-4620-b3e1-d45083acee56" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.516092 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.516617 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key 
--tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvdcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7f8685b49f-rqzc2_openstack(0724b33e-42df-4030-98fe-cf498befbf2e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.517234 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.517485 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-c
a-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cv5lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-querier-58c84b5844-ts8jt_openstack(15cb4b80-ac3e-407f-ac7d-b18c4f936241): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.518146 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" podUID="0724b33e-42df-4030-98fe-cf498befbf2e" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.518617 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" podUID="15cb4b80-ac3e-407f-ac7d-b18c4f936241" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.521581 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.521718 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 
--tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key --tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:clou
dkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzhrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7f8685b49f-g5p5k_openstack(cbd2b381-c620-4ff8-9942-e9f5b1c484d4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.523152 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" podUID="cbd2b381-c620-4ff8-9942-e9f5b1c484d4" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.528918 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.529213 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w6nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-585d9bcbc-95gmb_openstack(f0df7f89-f92f-4f95-8150-5f864d8d4134): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.530374 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" podUID="f0df7f89-f92f-4f95-8150-5f864d8d4134" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.552981 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.553164 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Vol
umeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdzwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-index-gateway-0_openstack(ef4ec5b3-b0ad-4a36-a280-67da2ffb786e): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.554309 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="ef4ec5b3-b0ad-4a36-a280-67da2ffb786e" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.856736 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.857040 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d5h5b4h549hb6h5fh96h5bdhbbh56bh5c9h699h559h56dh696hdbh5chbfhb8h5d4hc7h547h87h589h5b5h66h7dh5dbh65dh84h5b8h58bhd8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4rwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:ni
l,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-st4vx_openstack(2efbff9d-b303-430c-b06c-36b79284a3f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.858224 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-st4vx" podUID="2efbff9d-b303-430c-b06c-36b79284a3f1" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.894126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-query-frontend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" podUID="3638d231-c31c-4620-b3e1-d45083acee56" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.894346 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" podUID="f0df7f89-f92f-4f95-8150-5f864d8d4134" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.894378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-st4vx" podUID="2efbff9d-b303-430c-b06c-36b79284a3f1" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.894405 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" podUID="cbd2b381-c620-4ff8-9942-e9f5b1c484d4" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.894461 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934\\\"\"" 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" podUID="0724b33e-42df-4030-98fe-cf498befbf2e" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.894825 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" podUID="15cb4b80-ac3e-407f-ac7d-b18c4f936241" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.896739 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-kxl4t" podUID="a510fbea-dfa0-48e9-9557-a9e7f75cae9a" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.896892 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e246a943-0c6d-4738-8a73-d3e576819680" Feb 16 15:24:24 crc kubenswrapper[4835]: E0216 15:24:24.896985 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="ef4ec5b3-b0ad-4a36-a280-67da2ffb786e" Feb 16 15:24:25 crc kubenswrapper[4835]: E0216 15:24:25.190235 4835 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Feb 16 15:24:25 crc kubenswrapper[4835]: E0216 15:24:25.191075 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h58ch568h594h66fh7bh5f6h64bh596h67ch559h578hfch65bh55bh656h548h94h9h64h695hf8hb4h675h584h554h99h5c9h64fh566h569h657q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubP
ath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv6ph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(c42ab514-0d06-4182-9ef7-6bcd9fb2afd8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Feb 16 15:24:26 crc kubenswrapper[4835]: E0216 15:24:26.125158 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 15:24:26 crc kubenswrapper[4835]: E0216 15:24:26.125636 4835 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 15:24:26 crc kubenswrapper[4835]: E0216 15:24:26.125813 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x6tfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(117011cd-1ad8-4aff-b5d4-49bce3381f02): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:24:26 crc kubenswrapper[4835]: E0216 15:24:26.127045 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="117011cd-1ad8-4aff-b5d4-49bce3381f02" Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.906837 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8da5ef68-09c6-4938-99a8-b728f03b4d14","Type":"ContainerStarted","Data":"17be4b382c274f2740fef7a40d4d3a1ca69ce9d2e0c30ddfde8d408ed02ec44b"} Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.907082 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.909797 
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1949747b-769a-41b5-96cc-5d51092d1615","Type":"ContainerStarted","Data":"f838db963aff73c2c36377afcfbb416559cc3335f0003138b4ca91c6c8aeba54"} Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.913162 4835 generic.go:334] "Generic (PLEG): container finished" podID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" containerID="6f6400249d7986bfb5ab30edc41ad9fd02a606e379535e8f3c8ebe6b2939dac2" exitCode=0 Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.913230 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jn278" event={"ID":"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa","Type":"ContainerDied","Data":"6f6400249d7986bfb5ab30edc41ad9fd02a606e379535e8f3c8ebe6b2939dac2"} Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.915842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32e52175-bb63-4076-a7af-4cf969b90ec6","Type":"ContainerStarted","Data":"274e3770cf54a8d97f0e785c6359dbd9b64fa1f0e4088735a4a2ab2b97cd2ecb"} Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.920667 4835 generic.go:334] "Generic (PLEG): container finished" podID="b48c6a8d-4722-4111-8c27-82ea2f235c24" containerID="36bd1b15dcdae06493e8df36b7dda472b73fbd5f513b680d38b9d458c3e106bb" exitCode=0 Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.920744 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" event={"ID":"b48c6a8d-4722-4111-8c27-82ea2f235c24","Type":"ContainerDied","Data":"36bd1b15dcdae06493e8df36b7dda472b73fbd5f513b680d38b9d458c3e106bb"} Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.925420 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"752083aa-579e-46dc-addb-b923b394b393","Type":"ContainerStarted","Data":"900f958d5c5ddb804b9b0c3ef6020fb312456717ee583dd17e92cea0055b6321"} Feb 16 
15:24:26 crc kubenswrapper[4835]: E0216 15:24:26.925852 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="117011cd-1ad8-4aff-b5d4-49bce3381f02" Feb 16 15:24:26 crc kubenswrapper[4835]: I0216 15:24:26.930177 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.637540259 podStartE2EDuration="37.92922129s" podCreationTimestamp="2026-02-16 15:23:49 +0000 UTC" firstStartedPulling="2026-02-16 15:24:08.950941329 +0000 UTC m=+998.242934224" lastFinishedPulling="2026-02-16 15:24:25.24262236 +0000 UTC m=+1014.534615255" observedRunningTime="2026-02-16 15:24:26.921208613 +0000 UTC m=+1016.213201518" watchObservedRunningTime="2026-02-16 15:24:26.92922129 +0000 UTC m=+1016.221214185" Feb 16 15:24:27 crc kubenswrapper[4835]: I0216 15:24:27.933707 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" event={"ID":"b48c6a8d-4722-4111-8c27-82ea2f235c24","Type":"ContainerStarted","Data":"5517e082acacfe2c6b404f6f167448b402988c6ecf35c6a5741bff41615d3aff"} Feb 16 15:24:27 crc kubenswrapper[4835]: I0216 15:24:27.959553 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" podStartSLOduration=2.830640131 podStartE2EDuration="41.959498693s" podCreationTimestamp="2026-02-16 15:23:46 +0000 UTC" firstStartedPulling="2026-02-16 15:23:47.038786742 +0000 UTC m=+976.330779637" lastFinishedPulling="2026-02-16 15:24:26.167645304 +0000 UTC m=+1015.459638199" observedRunningTime="2026-02-16 15:24:27.952218474 +0000 UTC m=+1017.244211389" watchObservedRunningTime="2026-02-16 15:24:27.959498693 +0000 UTC m=+1017.251491598" Feb 16 15:24:28 crc kubenswrapper[4835]: E0216 15:24:28.440339 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="c42ab514-0d06-4182-9ef7-6bcd9fb2afd8" Feb 16 15:24:28 crc kubenswrapper[4835]: I0216 15:24:28.948733 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"134112d9-c103-4429-b224-13589ad6d931","Type":"ContainerStarted","Data":"3265fabd093ca007393d5b9338ef8c70912dd3223ab07751b64310b3ade745fa"} Feb 16 15:24:28 crc kubenswrapper[4835]: I0216 15:24:28.949489 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:28 crc kubenswrapper[4835]: I0216 15:24:28.951303 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1949747b-769a-41b5-96cc-5d51092d1615","Type":"ContainerStarted","Data":"be61092b2e61ed59a2a8816b39d80fe5e39bdc358707357cba71e37026c68154"} Feb 16 15:24:28 crc kubenswrapper[4835]: I0216 15:24:28.953326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8","Type":"ContainerStarted","Data":"3755a1b37e5d2a416ae3c0f7400ab5760f060d30066cc5a10040099789a47a29"} Feb 16 15:24:28 crc kubenswrapper[4835]: I0216 15:24:28.956274 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jn278" event={"ID":"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa","Type":"ContainerStarted","Data":"200319bc87946a0032994708200b2e8708cd1244d547d35cb387375842dd9b02"} Feb 16 15:24:28 crc kubenswrapper[4835]: E0216 15:24:28.956882 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="c42ab514-0d06-4182-9ef7-6bcd9fb2afd8" Feb 16 15:24:28 crc kubenswrapper[4835]: I0216 15:24:28.957070 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:24:28 crc kubenswrapper[4835]: I0216 15:24:28.980974 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=10.835382294 podStartE2EDuration="27.980958206s" podCreationTimestamp="2026-02-16 15:24:01 +0000 UTC" firstStartedPulling="2026-02-16 15:24:10.064009258 +0000 UTC m=+999.356002153" lastFinishedPulling="2026-02-16 15:24:27.20958517 +0000 UTC m=+1016.501578065" observedRunningTime="2026-02-16 15:24:28.977065685 +0000 UTC m=+1018.269058580" watchObservedRunningTime="2026-02-16 15:24:28.980958206 +0000 UTC m=+1018.272951101" Feb 16 15:24:29 crc kubenswrapper[4835]: I0216 15:24:29.003878 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-jn278" podStartSLOduration=5.064258094 podStartE2EDuration="44.00386188s" podCreationTimestamp="2026-02-16 15:23:45 +0000 UTC" firstStartedPulling="2026-02-16 15:23:46.721165726 +0000 UTC m=+976.013158621" lastFinishedPulling="2026-02-16 15:24:25.660769512 +0000 UTC m=+1014.952762407" observedRunningTime="2026-02-16 15:24:28.997050463 +0000 UTC m=+1018.289043358" watchObservedRunningTime="2026-02-16 15:24:29.00386188 +0000 UTC m=+1018.295854775" Feb 16 15:24:29 crc kubenswrapper[4835]: I0216 15:24:29.021073 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.885514145 podStartE2EDuration="30.021057146s" podCreationTimestamp="2026-02-16 15:23:59 +0000 UTC" firstStartedPulling="2026-02-16 15:24:10.074906491 +0000 UTC m=+999.366899386" 
lastFinishedPulling="2026-02-16 15:24:27.210449492 +0000 UTC m=+1016.502442387" observedRunningTime="2026-02-16 15:24:29.01776642 +0000 UTC m=+1018.309759325" watchObservedRunningTime="2026-02-16 15:24:29.021057146 +0000 UTC m=+1018.313050041" Feb 16 15:24:29 crc kubenswrapper[4835]: I0216 15:24:29.966264 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerStarted","Data":"d0fedda1ba590b70251a73c818a04c00be922fd47dcc229ed09e2be530b31a4a"} Feb 16 15:24:29 crc kubenswrapper[4835]: I0216 15:24:29.968580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"23555de7-4851-4730-b8b3-9d788622420a","Type":"ContainerStarted","Data":"0d3c31a26def715f5af143c32c4cd181d65ccc189e35189bf9335a5005566221"} Feb 16 15:24:29 crc kubenswrapper[4835]: E0216 15:24:29.971945 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="c42ab514-0d06-4182-9ef7-6bcd9fb2afd8" Feb 16 15:24:30 crc kubenswrapper[4835]: I0216 15:24:30.980663 4835 generic.go:334] "Generic (PLEG): container finished" podID="32e52175-bb63-4076-a7af-4cf969b90ec6" containerID="274e3770cf54a8d97f0e785c6359dbd9b64fa1f0e4088735a4a2ab2b97cd2ecb" exitCode=0 Feb 16 15:24:30 crc kubenswrapper[4835]: I0216 15:24:30.980774 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32e52175-bb63-4076-a7af-4cf969b90ec6","Type":"ContainerDied","Data":"274e3770cf54a8d97f0e785c6359dbd9b64fa1f0e4088735a4a2ab2b97cd2ecb"} Feb 16 15:24:30 crc kubenswrapper[4835]: I0216 15:24:30.983770 4835 generic.go:334] "Generic (PLEG): container finished" podID="752083aa-579e-46dc-addb-b923b394b393" 
containerID="900f958d5c5ddb804b9b0c3ef6020fb312456717ee583dd17e92cea0055b6321" exitCode=0 Feb 16 15:24:30 crc kubenswrapper[4835]: I0216 15:24:30.985094 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"752083aa-579e-46dc-addb-b923b394b393","Type":"ContainerDied","Data":"900f958d5c5ddb804b9b0c3ef6020fb312456717ee583dd17e92cea0055b6321"} Feb 16 15:24:31 crc kubenswrapper[4835]: I0216 15:24:31.439965 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:31 crc kubenswrapper[4835]: I0216 15:24:31.440322 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:31 crc kubenswrapper[4835]: I0216 15:24:31.476424 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:24:31 crc kubenswrapper[4835]: I0216 15:24:31.513418 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:31 crc kubenswrapper[4835]: I0216 15:24:31.996227 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"32e52175-bb63-4076-a7af-4cf969b90ec6","Type":"ContainerStarted","Data":"5eece9455b079f235bcee670fbdf6f61b5f9194df80fce01f4f80f6e487c06d8"} Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.000357 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"752083aa-579e-46dc-addb-b923b394b393","Type":"ContainerStarted","Data":"5614e48c987f29c48ed011db0f1b15474446e97d008416cee7b84d58fc0cfdb3"} Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.028267 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.837708888999998 podStartE2EDuration="45.028248116s" podCreationTimestamp="2026-02-16 15:23:47 +0000 UTC" 
firstStartedPulling="2026-02-16 15:24:08.05234713 +0000 UTC m=+997.344340025" lastFinishedPulling="2026-02-16 15:24:25.242886357 +0000 UTC m=+1014.534879252" observedRunningTime="2026-02-16 15:24:32.020920916 +0000 UTC m=+1021.312913821" watchObservedRunningTime="2026-02-16 15:24:32.028248116 +0000 UTC m=+1021.320241021" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.048602 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.057993 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.400355656 podStartE2EDuration="44.057972556s" podCreationTimestamp="2026-02-16 15:23:48 +0000 UTC" firstStartedPulling="2026-02-16 15:24:08.69645501 +0000 UTC m=+997.988447905" lastFinishedPulling="2026-02-16 15:24:25.3540719 +0000 UTC m=+1014.646064805" observedRunningTime="2026-02-16 15:24:32.049134847 +0000 UTC m=+1021.341127772" watchObservedRunningTime="2026-02-16 15:24:32.057972556 +0000 UTC m=+1021.349965451" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.356226 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jn278"] Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.356436 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-jn278" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" containerName="dnsmasq-dns" containerID="cri-o://200319bc87946a0032994708200b2e8708cd1244d547d35cb387375842dd9b02" gracePeriod=10 Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.398601 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2gz9j"] Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.435166 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2gz9j"] Feb 16 15:24:32 crc 
kubenswrapper[4835]: I0216 15:24:32.435259 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.438129 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.451938 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-gqjb6"] Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.453287 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.460096 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.482051 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-gqjb6"] Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.545870 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tr4b\" (UniqueName: \"kubernetes.io/projected/25c80024-2ae0-4be9-82f5-c61fbb0ab518-kube-api-access-6tr4b\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.545914 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-config\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.545948 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cgmbd\" (UniqueName: \"kubernetes.io/projected/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-kube-api-access-cgmbd\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.545983 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-config\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.546006 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-ovs-rundir\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.546025 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.546042 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.546065 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.546112 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-ovn-rundir\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.546133 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-combined-ca-bundle\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647198 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-ovn-rundir\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647244 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-combined-ca-bundle\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647326 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6tr4b\" (UniqueName: \"kubernetes.io/projected/25c80024-2ae0-4be9-82f5-c61fbb0ab518-kube-api-access-6tr4b\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647350 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-config\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647365 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmbd\" (UniqueName: \"kubernetes.io/projected/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-kube-api-access-cgmbd\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647396 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-config\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647418 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-ovs-rundir\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647435 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647451 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647475 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.647556 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-ovn-rundir\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.649040 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-config\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.649075 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-ovsdbserver-sb\") 
pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.649109 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-ovs-rundir\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.649184 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-config\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.649759 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.653196 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-combined-ca-bundle\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.653592 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " 
pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.664705 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmbd\" (UniqueName: \"kubernetes.io/projected/80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce-kube-api-access-cgmbd\") pod \"ovn-controller-metrics-2gz9j\" (UID: \"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce\") " pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.666789 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tr4b\" (UniqueName: \"kubernetes.io/projected/25c80024-2ae0-4be9-82f5-c61fbb0ab518-kube-api-access-6tr4b\") pod \"dnsmasq-dns-7f896c8c65-gqjb6\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.781488 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.781598 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2gz9j" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.811438 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v7x8x"] Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.811670 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" containerName="dnsmasq-dns" containerID="cri-o://5517e082acacfe2c6b404f6f167448b402988c6ecf35c6a5741bff41615d3aff" gracePeriod=10 Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.818114 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.860940 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wfxbr"] Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.862418 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.866860 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 15:24:32 crc kubenswrapper[4835]: I0216 15:24:32.947073 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wfxbr"] Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.010350 4835 generic.go:334] "Generic (PLEG): container finished" podID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" containerID="200319bc87946a0032994708200b2e8708cd1244d547d35cb387375842dd9b02" exitCode=0 Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.010448 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jn278" event={"ID":"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa","Type":"ContainerDied","Data":"200319bc87946a0032994708200b2e8708cd1244d547d35cb387375842dd9b02"} Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.015255 4835 generic.go:334] "Generic (PLEG): container finished" podID="b48c6a8d-4722-4111-8c27-82ea2f235c24" containerID="5517e082acacfe2c6b404f6f167448b402988c6ecf35c6a5741bff41615d3aff" exitCode=0 Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.015351 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" event={"ID":"b48c6a8d-4722-4111-8c27-82ea2f235c24","Type":"ContainerDied","Data":"5517e082acacfe2c6b404f6f167448b402988c6ecf35c6a5741bff41615d3aff"} Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.059218 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-config\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.060186 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.060222 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5x5n\" (UniqueName: \"kubernetes.io/projected/6935249e-0331-48cf-9b65-1db34f680e9b-kube-api-access-l5x5n\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.060243 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.060289 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.165347 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 
15:24:33.165430 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5x5n\" (UniqueName: \"kubernetes.io/projected/6935249e-0331-48cf-9b65-1db34f680e9b-kube-api-access-l5x5n\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.165465 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.166851 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.166880 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.167052 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.167242 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-config\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.168215 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.168621 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-config\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.184335 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5x5n\" (UniqueName: \"kubernetes.io/projected/6935249e-0331-48cf-9b65-1db34f680e9b-kube-api-access-l5x5n\") pod \"dnsmasq-dns-86db49b7ff-wfxbr\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.302851 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.348042 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.403116 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-gqjb6"] Feb 16 15:24:33 crc kubenswrapper[4835]: W0216 15:24:33.406592 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25c80024_2ae0_4be9_82f5_c61fbb0ab518.slice/crio-29cd260a8a6163fb0621aecd2a9e30e27bb784ecf5e8e3273d6b2eb88c921862 WatchSource:0}: Error finding container 29cd260a8a6163fb0621aecd2a9e30e27bb784ecf5e8e3273d6b2eb88c921862: Status 404 returned error can't find the container with id 29cd260a8a6163fb0621aecd2a9e30e27bb784ecf5e8e3273d6b2eb88c921862 Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.438931 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.471581 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-dns-svc\") pod \"b48c6a8d-4722-4111-8c27-82ea2f235c24\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.471938 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hplmh\" (UniqueName: \"kubernetes.io/projected/b48c6a8d-4722-4111-8c27-82ea2f235c24-kube-api-access-hplmh\") pod \"b48c6a8d-4722-4111-8c27-82ea2f235c24\" (UID: \"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.471970 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-config\") pod \"b48c6a8d-4722-4111-8c27-82ea2f235c24\" (UID: 
\"b48c6a8d-4722-4111-8c27-82ea2f235c24\") " Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.472681 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-config\") pod \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.472725 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kqqx\" (UniqueName: \"kubernetes.io/projected/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-kube-api-access-4kqqx\") pod \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.476778 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48c6a8d-4722-4111-8c27-82ea2f235c24-kube-api-access-hplmh" (OuterVolumeSpecName: "kube-api-access-hplmh") pod "b48c6a8d-4722-4111-8c27-82ea2f235c24" (UID: "b48c6a8d-4722-4111-8c27-82ea2f235c24"). InnerVolumeSpecName "kube-api-access-hplmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.479783 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-kube-api-access-4kqqx" (OuterVolumeSpecName: "kube-api-access-4kqqx") pod "0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" (UID: "0620e270-4a0a-41b9-8e14-a9f29ce6b9aa"). InnerVolumeSpecName "kube-api-access-4kqqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.491251 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2gz9j"] Feb 16 15:24:33 crc kubenswrapper[4835]: W0216 15:24:33.499795 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80e12b34_9a09_485e_b0b0_bcf8ee4ed5ce.slice/crio-922e5536ff66c17e9be64bc72003c2d9c918d453675b4640d5f4331d309e5d27 WatchSource:0}: Error finding container 922e5536ff66c17e9be64bc72003c2d9c918d453675b4640d5f4331d309e5d27: Status 404 returned error can't find the container with id 922e5536ff66c17e9be64bc72003c2d9c918d453675b4640d5f4331d309e5d27 Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.524260 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-config" (OuterVolumeSpecName: "config") pod "b48c6a8d-4722-4111-8c27-82ea2f235c24" (UID: "b48c6a8d-4722-4111-8c27-82ea2f235c24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.533162 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b48c6a8d-4722-4111-8c27-82ea2f235c24" (UID: "b48c6a8d-4722-4111-8c27-82ea2f235c24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.543899 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-config" (OuterVolumeSpecName: "config") pod "0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" (UID: "0620e270-4a0a-41b9-8e14-a9f29ce6b9aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.574319 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-dns-svc\") pod \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\" (UID: \"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa\") " Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.574696 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.574710 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hplmh\" (UniqueName: \"kubernetes.io/projected/b48c6a8d-4722-4111-8c27-82ea2f235c24-kube-api-access-hplmh\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.574720 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48c6a8d-4722-4111-8c27-82ea2f235c24-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.574728 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.574736 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kqqx\" (UniqueName: \"kubernetes.io/projected/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-kube-api-access-4kqqx\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.616876 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" (UID: "0620e270-4a0a-41b9-8e14-a9f29ce6b9aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.676096 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:33 crc kubenswrapper[4835]: I0216 15:24:33.798995 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wfxbr"] Feb 16 15:24:33 crc kubenswrapper[4835]: W0216 15:24:33.821615 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6935249e_0331_48cf_9b65_1db34f680e9b.slice/crio-aa99c5c33082172300225d75a9e1ac29d0ebcdcd01ad653008087d602ecd5d02 WatchSource:0}: Error finding container aa99c5c33082172300225d75a9e1ac29d0ebcdcd01ad653008087d602ecd5d02: Status 404 returned error can't find the container with id aa99c5c33082172300225d75a9e1ac29d0ebcdcd01ad653008087d602ecd5d02 Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.022443 4835 generic.go:334] "Generic (PLEG): container finished" podID="6935249e-0331-48cf-9b65-1db34f680e9b" containerID="ef301357d26b9e1f0c3eacece96b8e4aa87ad8779875d7a6ae4f1d205e74b9c0" exitCode=0 Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.022508 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" event={"ID":"6935249e-0331-48cf-9b65-1db34f680e9b","Type":"ContainerDied","Data":"ef301357d26b9e1f0c3eacece96b8e4aa87ad8779875d7a6ae4f1d205e74b9c0"} Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.022548 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" 
event={"ID":"6935249e-0331-48cf-9b65-1db34f680e9b","Type":"ContainerStarted","Data":"aa99c5c33082172300225d75a9e1ac29d0ebcdcd01ad653008087d602ecd5d02"} Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.025996 4835 generic.go:334] "Generic (PLEG): container finished" podID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerID="1f6781b7fe8fab18c4c056bf145c5a7b7ded309497fe0f5918e03a3c79355a34" exitCode=0 Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.026076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" event={"ID":"25c80024-2ae0-4be9-82f5-c61fbb0ab518","Type":"ContainerDied","Data":"1f6781b7fe8fab18c4c056bf145c5a7b7ded309497fe0f5918e03a3c79355a34"} Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.026102 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" event={"ID":"25c80024-2ae0-4be9-82f5-c61fbb0ab518","Type":"ContainerStarted","Data":"29cd260a8a6163fb0621aecd2a9e30e27bb784ecf5e8e3273d6b2eb88c921862"} Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.027273 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2gz9j" event={"ID":"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce","Type":"ContainerStarted","Data":"8437e81da4e1f687bf63727e651de7cde82970eb5d404db43b070364981b0fa1"} Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.027310 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2gz9j" event={"ID":"80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce","Type":"ContainerStarted","Data":"922e5536ff66c17e9be64bc72003c2d9c918d453675b4640d5f4331d309e5d27"} Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.029440 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jn278" event={"ID":"0620e270-4a0a-41b9-8e14-a9f29ce6b9aa","Type":"ContainerDied","Data":"aae7382e5b9528e267db8dbbb3c2d1077903a97286bd3a450cf41287ca16aac4"} Feb 16 15:24:34 crc 
kubenswrapper[4835]: I0216 15:24:34.029449 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jn278" Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.029497 4835 scope.go:117] "RemoveContainer" containerID="200319bc87946a0032994708200b2e8708cd1244d547d35cb387375842dd9b02" Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.032231 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" event={"ID":"b48c6a8d-4722-4111-8c27-82ea2f235c24","Type":"ContainerDied","Data":"a9def1ca71ba7f568edad5370b9e2669d17a410f2e710673c4e48bf406512d6c"} Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.032383 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-v7x8x" Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.073784 4835 scope.go:117] "RemoveContainer" containerID="6f6400249d7986bfb5ab30edc41ad9fd02a606e379535e8f3c8ebe6b2939dac2" Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.091074 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2gz9j" podStartSLOduration=2.091055569 podStartE2EDuration="2.091055569s" podCreationTimestamp="2026-02-16 15:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:34.081444259 +0000 UTC m=+1023.373437154" watchObservedRunningTime="2026-02-16 15:24:34.091055569 +0000 UTC m=+1023.383048464" Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.106331 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jn278"] Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.119724 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jn278"] Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.126462 4835 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v7x8x"] Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.132761 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-v7x8x"] Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.146941 4835 scope.go:117] "RemoveContainer" containerID="5517e082acacfe2c6b404f6f167448b402988c6ecf35c6a5741bff41615d3aff" Feb 16 15:24:34 crc kubenswrapper[4835]: I0216 15:24:34.187813 4835 scope.go:117] "RemoveContainer" containerID="36bd1b15dcdae06493e8df36b7dda472b73fbd5f513b680d38b9d458c3e106bb" Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.055756 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" event={"ID":"6935249e-0331-48cf-9b65-1db34f680e9b","Type":"ContainerStarted","Data":"f0d2fc7aea7bf3ef44dfbc82dc0be5ddf1b1e7b1ae375315c189aea7a8ce7332"} Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.056136 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.058601 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" event={"ID":"25c80024-2ae0-4be9-82f5-c61fbb0ab518","Type":"ContainerStarted","Data":"592c56bcf8cca97cacb4e5c9a646f7bedb8cb324577129494c52154c1dff401e"} Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.058834 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.084927 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" podStartSLOduration=3.084901347 podStartE2EDuration="3.084901347s" podCreationTimestamp="2026-02-16 15:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 15:24:35.074505317 +0000 UTC m=+1024.366498212" watchObservedRunningTime="2026-02-16 15:24:35.084901347 +0000 UTC m=+1024.376894262" Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.095767 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" podStartSLOduration=3.095740048 podStartE2EDuration="3.095740048s" podCreationTimestamp="2026-02-16 15:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:35.09041783 +0000 UTC m=+1024.382410725" watchObservedRunningTime="2026-02-16 15:24:35.095740048 +0000 UTC m=+1024.387732963" Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.390588 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" path="/var/lib/kubelet/pods/0620e270-4a0a-41b9-8e14-a9f29ce6b9aa/volumes" Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.391846 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" path="/var/lib/kubelet/pods/b48c6a8d-4722-4111-8c27-82ea2f235c24/volumes" Feb 16 15:24:35 crc kubenswrapper[4835]: I0216 15:24:35.393116 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 15:24:36 crc kubenswrapper[4835]: I0216 15:24:36.071725 4835 generic.go:334] "Generic (PLEG): container finished" podID="23555de7-4851-4730-b8b3-9d788622420a" containerID="0d3c31a26def715f5af143c32c4cd181d65ccc189e35189bf9335a5005566221" exitCode=0 Feb 16 15:24:36 crc kubenswrapper[4835]: I0216 15:24:36.071803 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"23555de7-4851-4730-b8b3-9d788622420a","Type":"ContainerDied","Data":"0d3c31a26def715f5af143c32c4cd181d65ccc189e35189bf9335a5005566221"} Feb 16 15:24:36 crc 
kubenswrapper[4835]: I0216 15:24:36.074011 4835 generic.go:334] "Generic (PLEG): container finished" podID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerID="d0fedda1ba590b70251a73c818a04c00be922fd47dcc229ed09e2be530b31a4a" exitCode=0 Feb 16 15:24:36 crc kubenswrapper[4835]: I0216 15:24:36.074207 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerDied","Data":"d0fedda1ba590b70251a73c818a04c00be922fd47dcc229ed09e2be530b31a4a"} Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.084588 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"e246a943-0c6d-4738-8a73-d3e576819680","Type":"ContainerStarted","Data":"a6a986b5db048f508d1e3331987c262dd2ca1e294e1777f71e620fb6afe300c3"} Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.089274 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" event={"ID":"0724b33e-42df-4030-98fe-cf498befbf2e","Type":"ContainerStarted","Data":"d8282ed367442c2f37bc42b59c7f761d85c250fb4c1afe6794dc474fdd9da0ad"} Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.090416 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.094006 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" event={"ID":"cbd2b381-c620-4ff8-9942-e9f5b1c484d4","Type":"ContainerStarted","Data":"993a4122d7c12f6eed49f5bf9c1e328858c7c7ff83e90d2cbd912267437a1bdb"} Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.094958 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.112064 4835 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=-9223372000.742727 podStartE2EDuration="36.112048145s" podCreationTimestamp="2026-02-16 15:24:01 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.596216229 +0000 UTC m=+998.888209124" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:37.108129423 +0000 UTC m=+1026.400122318" watchObservedRunningTime="2026-02-16 15:24:37.112048145 +0000 UTC m=+1026.404041040" Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.123274 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.124305 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.155228 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-g5p5k" podStartSLOduration=8.514239081 podStartE2EDuration="35.155213314s" podCreationTimestamp="2026-02-16 15:24:02 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.578610083 +0000 UTC m=+998.870602978" lastFinishedPulling="2026-02-16 15:24:36.219584316 +0000 UTC m=+1025.511577211" observedRunningTime="2026-02-16 15:24:37.129330843 +0000 UTC m=+1026.421323748" watchObservedRunningTime="2026-02-16 15:24:37.155213314 +0000 UTC m=+1026.447206209" Feb 16 15:24:37 crc kubenswrapper[4835]: I0216 15:24:37.172987 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-rqzc2" podStartSLOduration=-9223372001.681805 podStartE2EDuration="35.172971535s" podCreationTimestamp="2026-02-16 15:24:02 +0000 UTC" firstStartedPulling="2026-02-16 15:24:10.014068833 +0000 UTC m=+999.306061728" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:37.151745844 +0000 UTC m=+1026.443738749" watchObservedRunningTime="2026-02-16 15:24:37.172971535 +0000 UTC m=+1026.464964430" Feb 16 15:24:38 crc kubenswrapper[4835]: I0216 15:24:38.106169 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-st4vx" event={"ID":"2efbff9d-b303-430c-b06c-36b79284a3f1","Type":"ContainerStarted","Data":"01edec351b560b19c51beaa277c214b1ce1d7e3472af1c3e233956d958603f0d"} Feb 16 15:24:38 crc kubenswrapper[4835]: I0216 15:24:38.107302 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-st4vx" Feb 16 15:24:38 crc kubenswrapper[4835]: I0216 15:24:38.109873 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"ef4ec5b3-b0ad-4a36-a280-67da2ffb786e","Type":"ContainerStarted","Data":"f34041cf8a6ba91289dee4ca144ab9dc56aa8f4ed65d834f3a8446baeb0f5dc1"} Feb 16 15:24:38 crc kubenswrapper[4835]: I0216 15:24:38.124737 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-st4vx" podStartSLOduration=15.730210622 podStartE2EDuration="43.124716922s" podCreationTimestamp="2026-02-16 15:23:55 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.599323 +0000 UTC m=+998.891315895" lastFinishedPulling="2026-02-16 15:24:36.9938293 +0000 UTC m=+1026.285822195" observedRunningTime="2026-02-16 15:24:38.122437432 +0000 UTC m=+1027.414430328" watchObservedRunningTime="2026-02-16 15:24:38.124716922 +0000 UTC m=+1027.416709817" Feb 16 15:24:38 crc kubenswrapper[4835]: I0216 15:24:38.140423 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=-9223372000.714375 podStartE2EDuration="36.140400638s" podCreationTimestamp="2026-02-16 15:24:02 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.985466912 
+0000 UTC m=+999.277459807" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:38.137580385 +0000 UTC m=+1027.429573300" watchObservedRunningTime="2026-02-16 15:24:38.140400638 +0000 UTC m=+1027.432393533" Feb 16 15:24:38 crc kubenswrapper[4835]: I0216 15:24:38.785856 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 15:24:38 crc kubenswrapper[4835]: I0216 15:24:38.785906 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 15:24:38 crc kubenswrapper[4835]: I0216 15:24:38.856146 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 15:24:39 crc kubenswrapper[4835]: I0216 15:24:39.193144 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 15:24:40 crc kubenswrapper[4835]: I0216 15:24:40.089997 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 15:24:40 crc kubenswrapper[4835]: I0216 15:24:40.090052 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 15:24:40 crc kubenswrapper[4835]: I0216 15:24:40.159065 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 15:24:40 crc kubenswrapper[4835]: I0216 15:24:40.224871 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.548705 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c06c-account-create-update-nnptn"] Feb 16 15:24:41 crc kubenswrapper[4835]: E0216 15:24:41.549312 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" 
containerName="dnsmasq-dns" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.549324 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" containerName="dnsmasq-dns" Feb 16 15:24:41 crc kubenswrapper[4835]: E0216 15:24:41.549335 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" containerName="dnsmasq-dns" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.549341 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" containerName="dnsmasq-dns" Feb 16 15:24:41 crc kubenswrapper[4835]: E0216 15:24:41.549355 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" containerName="init" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.549361 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" containerName="init" Feb 16 15:24:41 crc kubenswrapper[4835]: E0216 15:24:41.549368 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" containerName="init" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.549373 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" containerName="init" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.549519 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48c6a8d-4722-4111-8c27-82ea2f235c24" containerName="dnsmasq-dns" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.549561 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0620e270-4a0a-41b9-8e14-a9f29ce6b9aa" containerName="dnsmasq-dns" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.550177 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.552613 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.562787 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c06c-account-create-update-nnptn"] Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.576502 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-xvrzt"] Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.577571 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.582971 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xvrzt"] Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.660350 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-mnvcx"] Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.661549 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.666849 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mnvcx"] Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.732445 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bba3-account-create-update-jb6x5"] Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.733653 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.735101 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.739521 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bba3-account-create-update-jb6x5"] Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.742626 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724bb12-af57-4aba-9403-07a999cde053-operator-scripts\") pod \"keystone-c06c-account-create-update-nnptn\" (UID: \"4724bb12-af57-4aba-9403-07a999cde053\") " pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.742691 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea4a1831-012e-4c40-80ef-db237493e6ac-operator-scripts\") pod \"keystone-db-create-xvrzt\" (UID: \"ea4a1831-012e-4c40-80ef-db237493e6ac\") " pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.742724 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gt29\" (UniqueName: \"kubernetes.io/projected/ea4a1831-012e-4c40-80ef-db237493e6ac-kube-api-access-2gt29\") pod \"keystone-db-create-xvrzt\" (UID: \"ea4a1831-012e-4c40-80ef-db237493e6ac\") " pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.742779 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9mj\" (UniqueName: \"kubernetes.io/projected/4724bb12-af57-4aba-9403-07a999cde053-kube-api-access-gw9mj\") pod 
\"keystone-c06c-account-create-update-nnptn\" (UID: \"4724bb12-af57-4aba-9403-07a999cde053\") " pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.846658 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882jr\" (UniqueName: \"kubernetes.io/projected/f6808008-de25-4d2d-8753-945ad39d27b3-kube-api-access-882jr\") pod \"placement-db-create-mnvcx\" (UID: \"f6808008-de25-4d2d-8753-945ad39d27b3\") " pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.846863 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lcvk\" (UniqueName: \"kubernetes.io/projected/1109c80c-b07a-4be2-8002-cadfbbc7e0af-kube-api-access-2lcvk\") pod \"placement-bba3-account-create-update-jb6x5\" (UID: \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\") " pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.846906 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724bb12-af57-4aba-9403-07a999cde053-operator-scripts\") pod \"keystone-c06c-account-create-update-nnptn\" (UID: \"4724bb12-af57-4aba-9403-07a999cde053\") " pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.847110 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea4a1831-012e-4c40-80ef-db237493e6ac-operator-scripts\") pod \"keystone-db-create-xvrzt\" (UID: \"ea4a1831-012e-4c40-80ef-db237493e6ac\") " pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.847187 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gt29\" 
(UniqueName: \"kubernetes.io/projected/ea4a1831-012e-4c40-80ef-db237493e6ac-kube-api-access-2gt29\") pod \"keystone-db-create-xvrzt\" (UID: \"ea4a1831-012e-4c40-80ef-db237493e6ac\") " pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.847218 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6808008-de25-4d2d-8753-945ad39d27b3-operator-scripts\") pod \"placement-db-create-mnvcx\" (UID: \"f6808008-de25-4d2d-8753-945ad39d27b3\") " pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.847318 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1109c80c-b07a-4be2-8002-cadfbbc7e0af-operator-scripts\") pod \"placement-bba3-account-create-update-jb6x5\" (UID: \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\") " pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.847389 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw9mj\" (UniqueName: \"kubernetes.io/projected/4724bb12-af57-4aba-9403-07a999cde053-kube-api-access-gw9mj\") pod \"keystone-c06c-account-create-update-nnptn\" (UID: \"4724bb12-af57-4aba-9403-07a999cde053\") " pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.848035 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea4a1831-012e-4c40-80ef-db237493e6ac-operator-scripts\") pod \"keystone-db-create-xvrzt\" (UID: \"ea4a1831-012e-4c40-80ef-db237493e6ac\") " pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.849102 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724bb12-af57-4aba-9403-07a999cde053-operator-scripts\") pod \"keystone-c06c-account-create-update-nnptn\" (UID: \"4724bb12-af57-4aba-9403-07a999cde053\") " pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.865712 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gt29\" (UniqueName: \"kubernetes.io/projected/ea4a1831-012e-4c40-80ef-db237493e6ac-kube-api-access-2gt29\") pod \"keystone-db-create-xvrzt\" (UID: \"ea4a1831-012e-4c40-80ef-db237493e6ac\") " pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.878365 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw9mj\" (UniqueName: \"kubernetes.io/projected/4724bb12-af57-4aba-9403-07a999cde053-kube-api-access-gw9mj\") pod \"keystone-c06c-account-create-update-nnptn\" (UID: \"4724bb12-af57-4aba-9403-07a999cde053\") " pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.899002 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.949796 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882jr\" (UniqueName: \"kubernetes.io/projected/f6808008-de25-4d2d-8753-945ad39d27b3-kube-api-access-882jr\") pod \"placement-db-create-mnvcx\" (UID: \"f6808008-de25-4d2d-8753-945ad39d27b3\") " pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.949879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lcvk\" (UniqueName: \"kubernetes.io/projected/1109c80c-b07a-4be2-8002-cadfbbc7e0af-kube-api-access-2lcvk\") pod \"placement-bba3-account-create-update-jb6x5\" (UID: \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\") " pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.950028 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6808008-de25-4d2d-8753-945ad39d27b3-operator-scripts\") pod \"placement-db-create-mnvcx\" (UID: \"f6808008-de25-4d2d-8753-945ad39d27b3\") " pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.950084 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1109c80c-b07a-4be2-8002-cadfbbc7e0af-operator-scripts\") pod \"placement-bba3-account-create-update-jb6x5\" (UID: \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\") " pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.951140 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1109c80c-b07a-4be2-8002-cadfbbc7e0af-operator-scripts\") pod 
\"placement-bba3-account-create-update-jb6x5\" (UID: \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\") " pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.951605 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6808008-de25-4d2d-8753-945ad39d27b3-operator-scripts\") pod \"placement-db-create-mnvcx\" (UID: \"f6808008-de25-4d2d-8753-945ad39d27b3\") " pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.967007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882jr\" (UniqueName: \"kubernetes.io/projected/f6808008-de25-4d2d-8753-945ad39d27b3-kube-api-access-882jr\") pod \"placement-db-create-mnvcx\" (UID: \"f6808008-de25-4d2d-8753-945ad39d27b3\") " pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.967168 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lcvk\" (UniqueName: \"kubernetes.io/projected/1109c80c-b07a-4be2-8002-cadfbbc7e0af-kube-api-access-2lcvk\") pod \"placement-bba3-account-create-update-jb6x5\" (UID: \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\") " pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:41 crc kubenswrapper[4835]: I0216 15:24:41.983412 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.061562 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.142178 4835 generic.go:334] "Generic (PLEG): container finished" podID="02aa07ee-7fa8-40e8-bd6a-2c98dc10edda" containerID="be6c3c1bc70c253782f3f2c577f9305b7696dcc38a63022b24140a906582a59a" exitCode=0 Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.142223 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda","Type":"ContainerDied","Data":"be6c3c1bc70c253782f3f2c577f9305b7696dcc38a63022b24140a906582a59a"} Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.175716 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.683716 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-gqjb6"] Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.684254 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" podUID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerName="dnsmasq-dns" containerID="cri-o://592c56bcf8cca97cacb4e5c9a646f7bedb8cb324577129494c52154c1dff401e" gracePeriod=10 Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.696475 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.719167 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-vfkhr"] Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.720618 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.739885 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vfkhr"] Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.786711 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" podUID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.851410 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xvrzt"] Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.874345 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-dns-svc\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.874392 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-config\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.874415 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.874435 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.874488 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqg6z\" (UniqueName: \"kubernetes.io/projected/515e8879-485e-4be3-9fb9-896feb6b2d6e-kube-api-access-cqg6z\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.976429 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqg6z\" (UniqueName: \"kubernetes.io/projected/515e8879-485e-4be3-9fb9-896feb6b2d6e-kube-api-access-cqg6z\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.976828 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-dns-svc\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.976850 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-config\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.976867 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.976888 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.977704 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.978457 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-dns-svc\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.979264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:42 crc kubenswrapper[4835]: I0216 15:24:42.979377 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-config\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.002989 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqg6z\" (UniqueName: \"kubernetes.io/projected/515e8879-485e-4be3-9fb9-896feb6b2d6e-kube-api-access-cqg6z\") pod \"dnsmasq-dns-698758b865-vfkhr\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.144910 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.158992 4835 generic.go:334] "Generic (PLEG): container finished" podID="9c673663-5be6-4ed4-b2b5-9a80e72391c6" containerID="801b9789de8f5c11c3939a5264c1048b9969650b0f2637dd6e44532e0063ad8e" exitCode=0 Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.159069 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c673663-5be6-4ed4-b2b5-9a80e72391c6","Type":"ContainerDied","Data":"801b9789de8f5c11c3939a5264c1048b9969650b0f2637dd6e44532e0063ad8e"} Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.160656 4835 generic.go:334] "Generic (PLEG): container finished" podID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerID="592c56bcf8cca97cacb4e5c9a646f7bedb8cb324577129494c52154c1dff401e" exitCode=0 Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.160701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" event={"ID":"25c80024-2ae0-4be9-82f5-c61fbb0ab518","Type":"ContainerDied","Data":"592c56bcf8cca97cacb4e5c9a646f7bedb8cb324577129494c52154c1dff401e"} Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.161618 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" event={"ID":"3638d231-c31c-4620-b3e1-d45083acee56","Type":"ContainerStarted","Data":"e8851ba8c23e74fa35a3dd5ecd11babc66401f71e5256e415b2478787ed4924f"} Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.162008 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.165960 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" event={"ID":"f0df7f89-f92f-4f95-8150-5f864d8d4134","Type":"ContainerStarted","Data":"b4b480806388c8ac9f2dd83a0ae9328c42385f713edd974205f9666e80c2e822"} Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.166343 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.167371 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xvrzt" event={"ID":"ea4a1831-012e-4c40-80ef-db237493e6ac","Type":"ContainerStarted","Data":"5c3fb6fd8bb8c535d7d44408aa246650b677500194dc00ecc9f2ca6f0fd41315"} Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.180018 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"02aa07ee-7fa8-40e8-bd6a-2c98dc10edda","Type":"ContainerStarted","Data":"6d1f0a33b0deb2299d91f7ad46df0af16247fd2287010bd0fef6b606d082fd50"} Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.180288 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.187846 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" 
event={"ID":"15cb4b80-ac3e-407f-ac7d-b18c4f936241","Type":"ContainerStarted","Data":"0071eb66e23c0e5cbf9fc0c3a9f30a0bc8f509c6a672534fe3d23fecf6309895"} Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.188081 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.206570 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" podStartSLOduration=-9223371994.648224 podStartE2EDuration="42.20655275s" podCreationTimestamp="2026-02-16 15:24:01 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.991141879 +0000 UTC m=+999.283134774" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:43.2015413 +0000 UTC m=+1032.493534195" watchObservedRunningTime="2026-02-16 15:24:43.20655275 +0000 UTC m=+1032.498545645" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.222867 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" podStartSLOduration=-9223371994.633045 podStartE2EDuration="42.221730654s" podCreationTimestamp="2026-02-16 15:24:01 +0000 UTC" firstStartedPulling="2026-02-16 15:24:08.947586162 +0000 UTC m=+998.239579057" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:43.214275451 +0000 UTC m=+1032.506268346" watchObservedRunningTime="2026-02-16 15:24:43.221730654 +0000 UTC m=+1032.513723549" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.241441 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.252610 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.341984553 
podStartE2EDuration="58.252591514s" podCreationTimestamp="2026-02-16 15:23:45 +0000 UTC" firstStartedPulling="2026-02-16 15:23:52.256364541 +0000 UTC m=+981.548357436" lastFinishedPulling="2026-02-16 15:24:08.166971502 +0000 UTC m=+997.458964397" observedRunningTime="2026-02-16 15:24:43.24202918 +0000 UTC m=+1032.534022075" watchObservedRunningTime="2026-02-16 15:24:43.252591514 +0000 UTC m=+1032.544584409" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.266819 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" podStartSLOduration=-9223371994.587976 podStartE2EDuration="42.266798932s" podCreationTimestamp="2026-02-16 15:24:01 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.990078411 +0000 UTC m=+999.282071306" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:43.259952835 +0000 UTC m=+1032.551945730" watchObservedRunningTime="2026-02-16 15:24:43.266798932 +0000 UTC m=+1032.558791827" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.306225 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.333428 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.344928 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.392085 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-config\") pod \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.392195 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tr4b\" (UniqueName: \"kubernetes.io/projected/25c80024-2ae0-4be9-82f5-c61fbb0ab518-kube-api-access-6tr4b\") pod \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.396164 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-dns-svc\") pod \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.396203 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-ovsdbserver-sb\") pod \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\" (UID: \"25c80024-2ae0-4be9-82f5-c61fbb0ab518\") " Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.401632 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c80024-2ae0-4be9-82f5-c61fbb0ab518-kube-api-access-6tr4b" (OuterVolumeSpecName: "kube-api-access-6tr4b") pod "25c80024-2ae0-4be9-82f5-c61fbb0ab518" (UID: "25c80024-2ae0-4be9-82f5-c61fbb0ab518"). InnerVolumeSpecName "kube-api-access-6tr4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.482910 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.482944 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bba3-account-create-update-jb6x5"] Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.482962 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c06c-account-create-update-nnptn"] Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.482977 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mnvcx"] Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.498358 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tr4b\" (UniqueName: \"kubernetes.io/projected/25c80024-2ae0-4be9-82f5-c61fbb0ab518-kube-api-access-6tr4b\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.513709 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25c80024-2ae0-4be9-82f5-c61fbb0ab518" (UID: "25c80024-2ae0-4be9-82f5-c61fbb0ab518"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.518167 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25c80024-2ae0-4be9-82f5-c61fbb0ab518" (UID: "25c80024-2ae0-4be9-82f5-c61fbb0ab518"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.525936 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-config" (OuterVolumeSpecName: "config") pod "25c80024-2ae0-4be9-82f5-c61fbb0ab518" (UID: "25c80024-2ae0-4be9-82f5-c61fbb0ab518"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.600502 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.600543 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.600554 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c80024-2ae0-4be9-82f5-c61fbb0ab518-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.789150 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 15:24:43 crc kubenswrapper[4835]: E0216 15:24:43.789489 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerName="init" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.789501 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerName="init" Feb 16 15:24:43 crc kubenswrapper[4835]: E0216 15:24:43.789546 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerName="dnsmasq-dns" Feb 16 15:24:43 crc 
kubenswrapper[4835]: I0216 15:24:43.789552 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerName="dnsmasq-dns" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.789707 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" containerName="dnsmasq-dns" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.794999 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.798793 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.798950 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.799811 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.799983 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-49k4z" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.800668 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vfkhr"] Feb 16 15:24:43 crc kubenswrapper[4835]: W0216 15:24:43.803217 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod515e8879_485e_4be3_9fb9_896feb6b2d6e.slice/crio-a61fe65810747c7a3708e6134144ff02658ce2b5021e8e1a5a39ba4cad6cd757 WatchSource:0}: Error finding container a61fe65810747c7a3708e6134144ff02658ce2b5021e8e1a5a39ba4cad6cd757: Status 404 returned error can't find the container with id a61fe65810747c7a3708e6134144ff02658ce2b5021e8e1a5a39ba4cad6cd757 Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.814371 4835 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.904413 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ca92c6-cb91-49f1-a005-047759f93742-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.904672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89ca92c6-cb91-49f1-a005-047759f93742-cache\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.904699 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89ca92c6-cb91-49f1-a005-047759f93742-lock\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.904771 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 15:24:43.904830 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxgl\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-kube-api-access-5qxgl\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:43 crc kubenswrapper[4835]: I0216 
15:24:43.904935 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3783dbf9-23f7-4dc4-8688-6002a4dc2879\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3783dbf9-23f7-4dc4-8688-6002a4dc2879\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.005985 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3783dbf9-23f7-4dc4-8688-6002a4dc2879\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3783dbf9-23f7-4dc4-8688-6002a4dc2879\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.006054 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ca92c6-cb91-49f1-a005-047759f93742-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.006072 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89ca92c6-cb91-49f1-a005-047759f93742-cache\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.006095 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89ca92c6-cb91-49f1-a005-047759f93742-lock\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.006151 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.006174 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qxgl\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-kube-api-access-5qxgl\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: E0216 15:24:44.006868 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:24:44 crc kubenswrapper[4835]: E0216 15:24:44.006884 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:24:44 crc kubenswrapper[4835]: E0216 15:24:44.006918 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift podName:89ca92c6-cb91-49f1-a005-047759f93742 nodeName:}" failed. No retries permitted until 2026-02-16 15:24:44.506906852 +0000 UTC m=+1033.798899747 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift") pod "swift-storage-0" (UID: "89ca92c6-cb91-49f1-a005-047759f93742") : configmap "swift-ring-files" not found Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.007716 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89ca92c6-cb91-49f1-a005-047759f93742-cache\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.007786 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89ca92c6-cb91-49f1-a005-047759f93742-lock\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.011033 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89ca92c6-cb91-49f1-a005-047759f93742-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.018280 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.018318 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3783dbf9-23f7-4dc4-8688-6002a4dc2879\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3783dbf9-23f7-4dc4-8688-6002a4dc2879\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/32bc12c1caf1ea397373dfa41bcb87bfc0d25e3412ff99064f55cbaea84fc715/globalmount\"" pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.023199 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qxgl\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-kube-api-access-5qxgl\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.061402 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3783dbf9-23f7-4dc4-8688-6002a4dc2879\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3783dbf9-23f7-4dc4-8688-6002a4dc2879\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.196346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mnvcx" event={"ID":"f6808008-de25-4d2d-8753-945ad39d27b3","Type":"ContainerStarted","Data":"a47b8b1d2f7e8668b5f3e5e509c409a592b324ad4f8aeb74d15ce09921755001"} Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.197388 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bba3-account-create-update-jb6x5" event={"ID":"1109c80c-b07a-4be2-8002-cadfbbc7e0af","Type":"ContainerStarted","Data":"f67cefe7bcf19c2cc7d77dd3f7aca5520ae58add04d81b647e6394b46e29eb38"} Feb 16 15:24:44 
crc kubenswrapper[4835]: I0216 15:24:44.199325 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" event={"ID":"25c80024-2ae0-4be9-82f5-c61fbb0ab518","Type":"ContainerDied","Data":"29cd260a8a6163fb0621aecd2a9e30e27bb784ecf5e8e3273d6b2eb88c921862"} Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.199376 4835 scope.go:117] "RemoveContainer" containerID="592c56bcf8cca97cacb4e5c9a646f7bedb8cb324577129494c52154c1dff401e" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.199547 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-gqjb6" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.200856 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c06c-account-create-update-nnptn" event={"ID":"4724bb12-af57-4aba-9403-07a999cde053","Type":"ContainerStarted","Data":"c7904c5cc4c9b9eeb5a71093377d0d377c27b7011bd2ede6a3e35c8c3e22abe2"} Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.202118 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vfkhr" event={"ID":"515e8879-485e-4be3-9fb9-896feb6b2d6e","Type":"ContainerStarted","Data":"a61fe65810747c7a3708e6134144ff02658ce2b5021e8e1a5a39ba4cad6cd757"} Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.217171 4835 scope.go:117] "RemoveContainer" containerID="1f6781b7fe8fab18c4c056bf145c5a7b7ded309497fe0f5918e03a3c79355a34" Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.234787 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-gqjb6"] Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.243140 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-gqjb6"] Feb 16 15:24:44 crc kubenswrapper[4835]: I0216 15:24:44.515163 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:44 crc kubenswrapper[4835]: E0216 15:24:44.515335 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:24:44 crc kubenswrapper[4835]: E0216 15:24:44.515544 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:24:44 crc kubenswrapper[4835]: E0216 15:24:44.515594 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift podName:89ca92c6-cb91-49f1-a005-047759f93742 nodeName:}" failed. No retries permitted until 2026-02-16 15:24:45.51558045 +0000 UTC m=+1034.807573345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift") pod "swift-storage-0" (UID: "89ca92c6-cb91-49f1-a005-047759f93742") : configmap "swift-ring-files" not found Feb 16 15:24:45 crc kubenswrapper[4835]: E0216 15:24:45.126502 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod515e8879_485e_4be3_9fb9_896feb6b2d6e.slice/crio-conmon-5eecb0a7dbd8c34ac2b5552bb09b5435e07f4e51ab6c405f6433be0417f26073.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod515e8879_485e_4be3_9fb9_896feb6b2d6e.slice/crio-5eecb0a7dbd8c34ac2b5552bb09b5435e07f4e51ab6c405f6433be0417f26073.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.232797 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerID="5eecb0a7dbd8c34ac2b5552bb09b5435e07f4e51ab6c405f6433be0417f26073" exitCode=0 Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.232860 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vfkhr" event={"ID":"515e8879-485e-4be3-9fb9-896feb6b2d6e","Type":"ContainerDied","Data":"5eecb0a7dbd8c34ac2b5552bb09b5435e07f4e51ab6c405f6433be0417f26073"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.257464 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c42ab514-0d06-4182-9ef7-6bcd9fb2afd8","Type":"ContainerStarted","Data":"5c584de22b259e16ac23ec2a56270cdd7c40e720e55be2c095696a705086a543"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.264320 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"23555de7-4851-4730-b8b3-9d788622420a","Type":"ContainerStarted","Data":"903967acfeff458df305ff38aca6ff8264a86169dd3fa69dc5db942aa1e8ba20"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.269921 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"117011cd-1ad8-4aff-b5d4-49bce3381f02","Type":"ContainerStarted","Data":"0bdae44239bea6763e5e8c6d3b9acb4c41c127b25e622e89284f7da3374d5f42"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.270715 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.276928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mnvcx" event={"ID":"f6808008-de25-4d2d-8753-945ad39d27b3","Type":"ContainerStarted","Data":"eb2f498c27e61160d5de9931604f1273d6cca14d697ee0f28d6c2d6a42bc7146"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.282901 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-bba3-account-create-update-jb6x5" event={"ID":"1109c80c-b07a-4be2-8002-cadfbbc7e0af","Type":"ContainerStarted","Data":"282a2bfc39068383fa47e81ecc2dd3123f81857610b5bb328a7b21207e34ec53"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.285001 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c06c-account-create-update-nnptn" event={"ID":"4724bb12-af57-4aba-9403-07a999cde053","Type":"ContainerStarted","Data":"84b81b2535b7a5b5a6c602923069cf26d31a5f30de8a4978b0dfaa6294cddfeb"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.288020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xvrzt" event={"ID":"ea4a1831-012e-4c40-80ef-db237493e6ac","Type":"ContainerStarted","Data":"0254adea66119980500110bf5476b48de449be7a5fd5ccd66670e15f41c0d8da"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.294265 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c673663-5be6-4ed4-b2b5-9a80e72391c6","Type":"ContainerStarted","Data":"f2a546ca957c78c041b24e9650ca0fdeac88dc0795f7140517eee29352b2709e"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.295625 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.304095 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerStarted","Data":"7fe625906ae9bc94a2919299e32c02960c711d849f6b00249e1dc090fe97af11"} Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.307076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kxl4t" event={"ID":"a510fbea-dfa0-48e9-9557-a9e7f75cae9a","Type":"ContainerStarted","Data":"ab6b90bd8faaaa9733c49de15c606699022a45c8ef6aefaceae2a1272573e822"} Feb 16 15:24:45 crc kubenswrapper[4835]: 
I0216 15:24:45.367316 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-mnvcx" podStartSLOduration=4.367297314 podStartE2EDuration="4.367297314s" podCreationTimestamp="2026-02-16 15:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:45.318434157 +0000 UTC m=+1034.610427052" watchObservedRunningTime="2026-02-16 15:24:45.367297314 +0000 UTC m=+1034.659290229" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.384546 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=20.699480729 podStartE2EDuration="53.38451238s" podCreationTimestamp="2026-02-16 15:23:52 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.59664084 +0000 UTC m=+998.888633735" lastFinishedPulling="2026-02-16 15:24:42.281672491 +0000 UTC m=+1031.573665386" observedRunningTime="2026-02-16 15:24:45.375354982 +0000 UTC m=+1034.667347877" watchObservedRunningTime="2026-02-16 15:24:45.38451238 +0000 UTC m=+1034.676505275" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.417286 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.85380719 podStartE2EDuration="50.417232888s" podCreationTimestamp="2026-02-16 15:23:55 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.727993566 +0000 UTC m=+999.019986461" lastFinishedPulling="2026-02-16 15:24:42.291419264 +0000 UTC m=+1031.583412159" observedRunningTime="2026-02-16 15:24:45.406314595 +0000 UTC m=+1034.698307500" watchObservedRunningTime="2026-02-16 15:24:45.417232888 +0000 UTC m=+1034.709225783" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.425289 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c80024-2ae0-4be9-82f5-c61fbb0ab518" 
path="/var/lib/kubelet/pods/25c80024-2ae0-4be9-82f5-c61fbb0ab518/volumes" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.425619 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-xvrzt" podStartSLOduration=4.425602425 podStartE2EDuration="4.425602425s" podCreationTimestamp="2026-02-16 15:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:45.424158798 +0000 UTC m=+1034.716151703" watchObservedRunningTime="2026-02-16 15:24:45.425602425 +0000 UTC m=+1034.717595320" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.456497 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=59.456482006 podStartE2EDuration="59.456482006s" podCreationTimestamp="2026-02-16 15:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:45.445115341 +0000 UTC m=+1034.737108246" watchObservedRunningTime="2026-02-16 15:24:45.456482006 +0000 UTC m=+1034.748474901" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.482153 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bba3-account-create-update-jb6x5" podStartSLOduration=4.482135161 podStartE2EDuration="4.482135161s" podCreationTimestamp="2026-02-16 15:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:45.477041569 +0000 UTC m=+1034.769034464" watchObservedRunningTime="2026-02-16 15:24:45.482135161 +0000 UTC m=+1034.774128056" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.501333 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c06c-account-create-update-nnptn" 
podStartSLOduration=4.501312568 podStartE2EDuration="4.501312568s" podCreationTimestamp="2026-02-16 15:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:45.496055712 +0000 UTC m=+1034.788048607" watchObservedRunningTime="2026-02-16 15:24:45.501312568 +0000 UTC m=+1034.793305463" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.543243 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:45 crc kubenswrapper[4835]: E0216 15:24:45.543448 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:24:45 crc kubenswrapper[4835]: E0216 15:24:45.543636 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:24:45 crc kubenswrapper[4835]: E0216 15:24:45.543748 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift podName:89ca92c6-cb91-49f1-a005-047759f93742 nodeName:}" failed. No retries permitted until 2026-02-16 15:24:47.543732158 +0000 UTC m=+1036.835725043 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift") pod "swift-storage-0" (UID: "89ca92c6-cb91-49f1-a005-047759f93742") : configmap "swift-ring-files" not found Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.604941 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-z5tpt"] Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.606073 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.614462 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z5tpt"] Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.718748 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1ee6-account-create-update-h7mfh"] Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.719804 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.722361 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.734363 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1ee6-account-create-update-h7mfh"] Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.746404 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95777ba-e0bd-4163-a3b4-7cfc9271a946-operator-scripts\") pod \"glance-db-create-z5tpt\" (UID: \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\") " pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.746490 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz94l\" (UniqueName: \"kubernetes.io/projected/e95777ba-e0bd-4163-a3b4-7cfc9271a946-kube-api-access-dz94l\") pod \"glance-db-create-z5tpt\" (UID: \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\") " pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.848098 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtlj\" (UniqueName: \"kubernetes.io/projected/578241de-b081-44dc-bab5-3ddfba91c2df-kube-api-access-qdtlj\") pod \"glance-1ee6-account-create-update-h7mfh\" (UID: \"578241de-b081-44dc-bab5-3ddfba91c2df\") " pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.848415 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/578241de-b081-44dc-bab5-3ddfba91c2df-operator-scripts\") pod \"glance-1ee6-account-create-update-h7mfh\" 
(UID: \"578241de-b081-44dc-bab5-3ddfba91c2df\") " pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.848563 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95777ba-e0bd-4163-a3b4-7cfc9271a946-operator-scripts\") pod \"glance-db-create-z5tpt\" (UID: \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\") " pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.848746 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz94l\" (UniqueName: \"kubernetes.io/projected/e95777ba-e0bd-4163-a3b4-7cfc9271a946-kube-api-access-dz94l\") pod \"glance-db-create-z5tpt\" (UID: \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\") " pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.849457 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95777ba-e0bd-4163-a3b4-7cfc9271a946-operator-scripts\") pod \"glance-db-create-z5tpt\" (UID: \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\") " pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.865044 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz94l\" (UniqueName: \"kubernetes.io/projected/e95777ba-e0bd-4163-a3b4-7cfc9271a946-kube-api-access-dz94l\") pod \"glance-db-create-z5tpt\" (UID: \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\") " pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.951377 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtlj\" (UniqueName: \"kubernetes.io/projected/578241de-b081-44dc-bab5-3ddfba91c2df-kube-api-access-qdtlj\") pod \"glance-1ee6-account-create-update-h7mfh\" (UID: 
\"578241de-b081-44dc-bab5-3ddfba91c2df\") " pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.951576 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/578241de-b081-44dc-bab5-3ddfba91c2df-operator-scripts\") pod \"glance-1ee6-account-create-update-h7mfh\" (UID: \"578241de-b081-44dc-bab5-3ddfba91c2df\") " pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.952480 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/578241de-b081-44dc-bab5-3ddfba91c2df-operator-scripts\") pod \"glance-1ee6-account-create-update-h7mfh\" (UID: \"578241de-b081-44dc-bab5-3ddfba91c2df\") " pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:45 crc kubenswrapper[4835]: I0216 15:24:45.965204 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.038636 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtlj\" (UniqueName: \"kubernetes.io/projected/578241de-b081-44dc-bab5-3ddfba91c2df-kube-api-access-qdtlj\") pod \"glance-1ee6-account-create-update-h7mfh\" (UID: \"578241de-b081-44dc-bab5-3ddfba91c2df\") " pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.091193 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.316012 4835 generic.go:334] "Generic (PLEG): container finished" podID="4724bb12-af57-4aba-9403-07a999cde053" containerID="84b81b2535b7a5b5a6c602923069cf26d31a5f30de8a4978b0dfaa6294cddfeb" exitCode=0 Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.316024 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c06c-account-create-update-nnptn" event={"ID":"4724bb12-af57-4aba-9403-07a999cde053","Type":"ContainerDied","Data":"84b81b2535b7a5b5a6c602923069cf26d31a5f30de8a4978b0dfaa6294cddfeb"} Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.318222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vfkhr" event={"ID":"515e8879-485e-4be3-9fb9-896feb6b2d6e","Type":"ContainerStarted","Data":"7b55d02f3ffc4e1ca70e5a68825e2b80e2c7602f11225ba8e35995d419189264"} Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.318349 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.319253 4835 generic.go:334] "Generic (PLEG): container finished" podID="ea4a1831-012e-4c40-80ef-db237493e6ac" containerID="0254adea66119980500110bf5476b48de449be7a5fd5ccd66670e15f41c0d8da" exitCode=0 Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.319299 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xvrzt" event={"ID":"ea4a1831-012e-4c40-80ef-db237493e6ac","Type":"ContainerDied","Data":"0254adea66119980500110bf5476b48de449be7a5fd5ccd66670e15f41c0d8da"} Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.322212 4835 generic.go:334] "Generic (PLEG): container finished" podID="a510fbea-dfa0-48e9-9557-a9e7f75cae9a" containerID="ab6b90bd8faaaa9733c49de15c606699022a45c8ef6aefaceae2a1272573e822" exitCode=0 Feb 16 15:24:46 crc 
kubenswrapper[4835]: I0216 15:24:46.322304 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kxl4t" event={"ID":"a510fbea-dfa0-48e9-9557-a9e7f75cae9a","Type":"ContainerDied","Data":"ab6b90bd8faaaa9733c49de15c606699022a45c8ef6aefaceae2a1272573e822"} Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.323760 4835 generic.go:334] "Generic (PLEG): container finished" podID="f6808008-de25-4d2d-8753-945ad39d27b3" containerID="eb2f498c27e61160d5de9931604f1273d6cca14d697ee0f28d6c2d6a42bc7146" exitCode=0 Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.323813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mnvcx" event={"ID":"f6808008-de25-4d2d-8753-945ad39d27b3","Type":"ContainerDied","Data":"eb2f498c27e61160d5de9931604f1273d6cca14d697ee0f28d6c2d6a42bc7146"} Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.327047 4835 generic.go:334] "Generic (PLEG): container finished" podID="1109c80c-b07a-4be2-8002-cadfbbc7e0af" containerID="282a2bfc39068383fa47e81ecc2dd3123f81857610b5bb328a7b21207e34ec53" exitCode=0 Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.327124 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bba3-account-create-update-jb6x5" event={"ID":"1109c80c-b07a-4be2-8002-cadfbbc7e0af","Type":"ContainerDied","Data":"282a2bfc39068383fa47e81ecc2dd3123f81857610b5bb328a7b21207e34ec53"} Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.455939 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-vfkhr" podStartSLOduration=4.455920899 podStartE2EDuration="4.455920899s" podCreationTimestamp="2026-02-16 15:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:46.42933999 +0000 UTC m=+1035.721332885" watchObservedRunningTime="2026-02-16 15:24:46.455920899 +0000 UTC 
m=+1035.747913794" Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.537780 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z5tpt"] Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.661368 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1ee6-account-create-update-h7mfh"] Feb 16 15:24:46 crc kubenswrapper[4835]: I0216 15:24:46.670678 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 15:24:46 crc kubenswrapper[4835]: W0216 15:24:46.678877 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod578241de_b081_44dc_bab5_3ddfba91c2df.slice/crio-056220fdb75812aa58abf103439f1d6bda6ceb11bbbd0567accc601b809df860 WatchSource:0}: Error finding container 056220fdb75812aa58abf103439f1d6bda6ceb11bbbd0567accc601b809df860: Status 404 returned error can't find the container with id 056220fdb75812aa58abf103439f1d6bda6ceb11bbbd0567accc601b809df860 Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.347457 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kxl4t" event={"ID":"a510fbea-dfa0-48e9-9557-a9e7f75cae9a","Type":"ContainerStarted","Data":"09c8db5b4dcc680b5a9f0771e487038b2cf7e081a69caade4f4db8862a41c3c2"} Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.347848 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kxl4t" event={"ID":"a510fbea-dfa0-48e9-9557-a9e7f75cae9a","Type":"ContainerStarted","Data":"932baf3fcdfab109f464e341e2cafa261517a5bce9f5026680752f6cd5f4b2a6"} Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.349631 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.349693 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.355090 4835 generic.go:334] "Generic (PLEG): container finished" podID="578241de-b081-44dc-bab5-3ddfba91c2df" containerID="c7dc7e3fbb1fd3e16e1606b6e6cdb2afb1325e083941874321988bc47b0e6233" exitCode=0 Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.355222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1ee6-account-create-update-h7mfh" event={"ID":"578241de-b081-44dc-bab5-3ddfba91c2df","Type":"ContainerDied","Data":"c7dc7e3fbb1fd3e16e1606b6e6cdb2afb1325e083941874321988bc47b0e6233"} Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.355374 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1ee6-account-create-update-h7mfh" event={"ID":"578241de-b081-44dc-bab5-3ddfba91c2df","Type":"ContainerStarted","Data":"056220fdb75812aa58abf103439f1d6bda6ceb11bbbd0567accc601b809df860"} Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.361554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"23555de7-4851-4730-b8b3-9d788622420a","Type":"ContainerStarted","Data":"f3fde4d7ad5aae8f0f8cacfb6b4d7bbe11ac53cf6fbd26cbede2db8695411bcc"} Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.361802 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.366390 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerStarted","Data":"9d23443d6e0d527badad6636d6f2c9bfc145e4f8471aaf2183e464f1930f93b2"} Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.367965 4835 generic.go:334] "Generic (PLEG): container finished" podID="e95777ba-e0bd-4163-a3b4-7cfc9271a946" containerID="1f52696865448b33775d5bb0cd992b560e790bc769264ff332657abeffc58e3e" 
exitCode=0 Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.367999 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z5tpt" event={"ID":"e95777ba-e0bd-4163-a3b4-7cfc9271a946","Type":"ContainerDied","Data":"1f52696865448b33775d5bb0cd992b560e790bc769264ff332657abeffc58e3e"} Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.368038 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z5tpt" event={"ID":"e95777ba-e0bd-4163-a3b4-7cfc9271a946","Type":"ContainerStarted","Data":"b9542e652dbbac9c75cd94ead703f18da04d68a87fbe3d873eaafaedffb3c293"} Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.376400 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.376843 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kxl4t" podStartSLOduration=19.994926463 podStartE2EDuration="52.376829655s" podCreationTimestamp="2026-02-16 15:23:55 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.83654538 +0000 UTC m=+999.128538275" lastFinishedPulling="2026-02-16 15:24:42.218448572 +0000 UTC m=+1031.510441467" observedRunningTime="2026-02-16 15:24:47.37160094 +0000 UTC m=+1036.663593855" watchObservedRunningTime="2026-02-16 15:24:47.376829655 +0000 UTC m=+1036.668822550" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.464061 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=22.772574339 podStartE2EDuration="55.464031736s" podCreationTimestamp="2026-02-16 15:23:52 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.592108223 +0000 UTC m=+998.884101118" lastFinishedPulling="2026-02-16 15:24:42.28356562 +0000 UTC m=+1031.575558515" observedRunningTime="2026-02-16 15:24:47.428016942 +0000 UTC m=+1036.720009837" 
watchObservedRunningTime="2026-02-16 15:24:47.464031736 +0000 UTC m=+1036.756024651" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.568113 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hm6g4"] Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.569490 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.573787 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.590881 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:47 crc kubenswrapper[4835]: E0216 15:24:47.591113 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:24:47 crc kubenswrapper[4835]: E0216 15:24:47.591129 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:24:47 crc kubenswrapper[4835]: E0216 15:24:47.591170 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift podName:89ca92c6-cb91-49f1-a005-047759f93742 nodeName:}" failed. No retries permitted until 2026-02-16 15:24:51.591155182 +0000 UTC m=+1040.883148077 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift") pod "swift-storage-0" (UID: "89ca92c6-cb91-49f1-a005-047759f93742") : configmap "swift-ring-files" not found Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.595631 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hm6g4"] Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.671360 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.693649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-877rw\" (UniqueName: \"kubernetes.io/projected/bcd56cf6-8d38-4399-9bbe-e69e57255cca-kube-api-access-877rw\") pod \"root-account-create-update-hm6g4\" (UID: \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\") " pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.693722 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd56cf6-8d38-4399-9bbe-e69e57255cca-operator-scripts\") pod \"root-account-create-update-hm6g4\" (UID: \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\") " pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.732164 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.759985 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rckvw"] Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.761302 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.765202 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.765386 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.775988 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.780664 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rckvw"] Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.795475 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-877rw\" (UniqueName: \"kubernetes.io/projected/bcd56cf6-8d38-4399-9bbe-e69e57255cca-kube-api-access-877rw\") pod \"root-account-create-update-hm6g4\" (UID: \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\") " pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.795554 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd56cf6-8d38-4399-9bbe-e69e57255cca-operator-scripts\") pod \"root-account-create-update-hm6g4\" (UID: \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\") " pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.796841 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd56cf6-8d38-4399-9bbe-e69e57255cca-operator-scripts\") pod \"root-account-create-update-hm6g4\" (UID: \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\") " pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 
15:24:47.831061 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-877rw\" (UniqueName: \"kubernetes.io/projected/bcd56cf6-8d38-4399-9bbe-e69e57255cca-kube-api-access-877rw\") pod \"root-account-create-update-hm6g4\" (UID: \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\") " pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.894918 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.896455 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqf42\" (UniqueName: \"kubernetes.io/projected/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-kube-api-access-kqf42\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.896550 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-swiftconf\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.896590 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-ring-data-devices\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.896650 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-scripts\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.896671 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-etc-swift\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.896708 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-combined-ca-bundle\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:47 crc kubenswrapper[4835]: I0216 15:24:47.896740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-dispersionconf\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:47.997837 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-swiftconf\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:47.998161 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-ring-data-devices\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:47.998207 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-scripts\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:47.998225 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-etc-swift\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:47.998244 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-combined-ca-bundle\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:47.998286 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-dispersionconf\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:47.998365 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqf42\" (UniqueName: 
\"kubernetes.io/projected/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-kube-api-access-kqf42\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.000407 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-ring-data-devices\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.000621 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-etc-swift\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.000658 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-scripts\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.004221 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-dispersionconf\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.007301 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-combined-ca-bundle\") pod \"swift-ring-rebalance-rckvw\" (UID: 
\"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.007373 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-swiftconf\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.013577 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqf42\" (UniqueName: \"kubernetes.io/projected/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-kube-api-access-kqf42\") pod \"swift-ring-rebalance-rckvw\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.091995 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.135834 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.154972 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.168342 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.175878 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.303243 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gt29\" (UniqueName: \"kubernetes.io/projected/ea4a1831-012e-4c40-80ef-db237493e6ac-kube-api-access-2gt29\") pod \"ea4a1831-012e-4c40-80ef-db237493e6ac\" (UID: \"ea4a1831-012e-4c40-80ef-db237493e6ac\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.303281 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea4a1831-012e-4c40-80ef-db237493e6ac-operator-scripts\") pod \"ea4a1831-012e-4c40-80ef-db237493e6ac\" (UID: \"ea4a1831-012e-4c40-80ef-db237493e6ac\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.303312 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw9mj\" (UniqueName: \"kubernetes.io/projected/4724bb12-af57-4aba-9403-07a999cde053-kube-api-access-gw9mj\") pod \"4724bb12-af57-4aba-9403-07a999cde053\" (UID: \"4724bb12-af57-4aba-9403-07a999cde053\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.303347 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-882jr\" (UniqueName: \"kubernetes.io/projected/f6808008-de25-4d2d-8753-945ad39d27b3-kube-api-access-882jr\") pod \"f6808008-de25-4d2d-8753-945ad39d27b3\" (UID: \"f6808008-de25-4d2d-8753-945ad39d27b3\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.303418 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6808008-de25-4d2d-8753-945ad39d27b3-operator-scripts\") pod \"f6808008-de25-4d2d-8753-945ad39d27b3\" (UID: \"f6808008-de25-4d2d-8753-945ad39d27b3\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.303442 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724bb12-af57-4aba-9403-07a999cde053-operator-scripts\") pod \"4724bb12-af57-4aba-9403-07a999cde053\" (UID: \"4724bb12-af57-4aba-9403-07a999cde053\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.303464 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1109c80c-b07a-4be2-8002-cadfbbc7e0af-operator-scripts\") pod \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\" (UID: \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.303520 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lcvk\" (UniqueName: \"kubernetes.io/projected/1109c80c-b07a-4be2-8002-cadfbbc7e0af-kube-api-access-2lcvk\") pod \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\" (UID: \"1109c80c-b07a-4be2-8002-cadfbbc7e0af\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.305815 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1109c80c-b07a-4be2-8002-cadfbbc7e0af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1109c80c-b07a-4be2-8002-cadfbbc7e0af" (UID: "1109c80c-b07a-4be2-8002-cadfbbc7e0af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.305851 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4a1831-012e-4c40-80ef-db237493e6ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea4a1831-012e-4c40-80ef-db237493e6ac" (UID: "ea4a1831-012e-4c40-80ef-db237493e6ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.305857 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6808008-de25-4d2d-8753-945ad39d27b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6808008-de25-4d2d-8753-945ad39d27b3" (UID: "f6808008-de25-4d2d-8753-945ad39d27b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.306147 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4724bb12-af57-4aba-9403-07a999cde053-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4724bb12-af57-4aba-9403-07a999cde053" (UID: "4724bb12-af57-4aba-9403-07a999cde053"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.308480 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1109c80c-b07a-4be2-8002-cadfbbc7e0af-kube-api-access-2lcvk" (OuterVolumeSpecName: "kube-api-access-2lcvk") pod "1109c80c-b07a-4be2-8002-cadfbbc7e0af" (UID: "1109c80c-b07a-4be2-8002-cadfbbc7e0af"). InnerVolumeSpecName "kube-api-access-2lcvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.308919 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6808008-de25-4d2d-8753-945ad39d27b3-kube-api-access-882jr" (OuterVolumeSpecName: "kube-api-access-882jr") pod "f6808008-de25-4d2d-8753-945ad39d27b3" (UID: "f6808008-de25-4d2d-8753-945ad39d27b3"). InnerVolumeSpecName "kube-api-access-882jr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.310079 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4724bb12-af57-4aba-9403-07a999cde053-kube-api-access-gw9mj" (OuterVolumeSpecName: "kube-api-access-gw9mj") pod "4724bb12-af57-4aba-9403-07a999cde053" (UID: "4724bb12-af57-4aba-9403-07a999cde053"). InnerVolumeSpecName "kube-api-access-gw9mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.312732 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4a1831-012e-4c40-80ef-db237493e6ac-kube-api-access-2gt29" (OuterVolumeSpecName: "kube-api-access-2gt29") pod "ea4a1831-012e-4c40-80ef-db237493e6ac" (UID: "ea4a1831-012e-4c40-80ef-db237493e6ac"). InnerVolumeSpecName "kube-api-access-2gt29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.381898 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bba3-account-create-update-jb6x5" event={"ID":"1109c80c-b07a-4be2-8002-cadfbbc7e0af","Type":"ContainerDied","Data":"f67cefe7bcf19c2cc7d77dd3f7aca5520ae58add04d81b647e6394b46e29eb38"} Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.381933 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bba3-account-create-update-jb6x5" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.381938 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f67cefe7bcf19c2cc7d77dd3f7aca5520ae58add04d81b647e6394b46e29eb38" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.384400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c06c-account-create-update-nnptn" event={"ID":"4724bb12-af57-4aba-9403-07a999cde053","Type":"ContainerDied","Data":"c7904c5cc4c9b9eeb5a71093377d0d377c27b7011bd2ede6a3e35c8c3e22abe2"} Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.384415 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7904c5cc4c9b9eeb5a71093377d0d377c27b7011bd2ede6a3e35c8c3e22abe2" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.384459 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c06c-account-create-update-nnptn" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.388283 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xvrzt" event={"ID":"ea4a1831-012e-4c40-80ef-db237493e6ac","Type":"ContainerDied","Data":"5c3fb6fd8bb8c535d7d44408aa246650b677500194dc00ecc9f2ca6f0fd41315"} Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.388319 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3fb6fd8bb8c535d7d44408aa246650b677500194dc00ecc9f2ca6f0fd41315" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.388369 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xvrzt" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.400805 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mnvcx" event={"ID":"f6808008-de25-4d2d-8753-945ad39d27b3","Type":"ContainerDied","Data":"a47b8b1d2f7e8668b5f3e5e509c409a592b324ad4f8aeb74d15ce09921755001"} Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.400849 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47b8b1d2f7e8668b5f3e5e509c409a592b324ad4f8aeb74d15ce09921755001" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.401219 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mnvcx" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.406458 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lcvk\" (UniqueName: \"kubernetes.io/projected/1109c80c-b07a-4be2-8002-cadfbbc7e0af-kube-api-access-2lcvk\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.406482 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gt29\" (UniqueName: \"kubernetes.io/projected/ea4a1831-012e-4c40-80ef-db237493e6ac-kube-api-access-2gt29\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.406495 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea4a1831-012e-4c40-80ef-db237493e6ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.406505 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw9mj\" (UniqueName: \"kubernetes.io/projected/4724bb12-af57-4aba-9403-07a999cde053-kube-api-access-gw9mj\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.406514 4835 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-882jr\" (UniqueName: \"kubernetes.io/projected/f6808008-de25-4d2d-8753-945ad39d27b3-kube-api-access-882jr\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.406536 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6808008-de25-4d2d-8753-945ad39d27b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.406544 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724bb12-af57-4aba-9403-07a999cde053-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.406553 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1109c80c-b07a-4be2-8002-cadfbbc7e0af-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.425145 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hm6g4"] Feb 16 15:24:48 crc kubenswrapper[4835]: W0216 15:24:48.430829 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcd56cf6_8d38_4399_9bbe_e69e57255cca.slice/crio-f5cf6b6243244fd3b00e4f4cf382240a0acdeb87b5dfb45a8cba7f1d9399546c WatchSource:0}: Error finding container f5cf6b6243244fd3b00e4f4cf382240a0acdeb87b5dfb45a8cba7f1d9399546c: Status 404 returned error can't find the container with id f5cf6b6243244fd3b00e4f4cf382240a0acdeb87b5dfb45a8cba7f1d9399546c Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.623874 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rckvw"] Feb 16 15:24:48 crc kubenswrapper[4835]: W0216 15:24:48.645687 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode13e8e74_7a87_4ed8_b8c7_91ec164f5ce5.slice/crio-992849b403eda4eee7d590a18b948af8411c2197aea5298592c97e0b5a53ebbe WatchSource:0}: Error finding container 992849b403eda4eee7d590a18b948af8411c2197aea5298592c97e0b5a53ebbe: Status 404 returned error can't find the container with id 992849b403eda4eee7d590a18b948af8411c2197aea5298592c97e0b5a53ebbe Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.830019 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.879339 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.913800 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/578241de-b081-44dc-bab5-3ddfba91c2df-operator-scripts\") pod \"578241de-b081-44dc-bab5-3ddfba91c2df\" (UID: \"578241de-b081-44dc-bab5-3ddfba91c2df\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.913874 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtlj\" (UniqueName: \"kubernetes.io/projected/578241de-b081-44dc-bab5-3ddfba91c2df-kube-api-access-qdtlj\") pod \"578241de-b081-44dc-bab5-3ddfba91c2df\" (UID: \"578241de-b081-44dc-bab5-3ddfba91c2df\") " Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.914588 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/578241de-b081-44dc-bab5-3ddfba91c2df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "578241de-b081-44dc-bab5-3ddfba91c2df" (UID: "578241de-b081-44dc-bab5-3ddfba91c2df"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:48 crc kubenswrapper[4835]: I0216 15:24:48.919701 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578241de-b081-44dc-bab5-3ddfba91c2df-kube-api-access-qdtlj" (OuterVolumeSpecName: "kube-api-access-qdtlj") pod "578241de-b081-44dc-bab5-3ddfba91c2df" (UID: "578241de-b081-44dc-bab5-3ddfba91c2df"). InnerVolumeSpecName "kube-api-access-qdtlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.015457 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95777ba-e0bd-4163-a3b4-7cfc9271a946-operator-scripts\") pod \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\" (UID: \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\") " Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.015717 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz94l\" (UniqueName: \"kubernetes.io/projected/e95777ba-e0bd-4163-a3b4-7cfc9271a946-kube-api-access-dz94l\") pod \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\" (UID: \"e95777ba-e0bd-4163-a3b4-7cfc9271a946\") " Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.015981 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e95777ba-e0bd-4163-a3b4-7cfc9271a946-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e95777ba-e0bd-4163-a3b4-7cfc9271a946" (UID: "e95777ba-e0bd-4163-a3b4-7cfc9271a946"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.016467 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/578241de-b081-44dc-bab5-3ddfba91c2df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.016488 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtlj\" (UniqueName: \"kubernetes.io/projected/578241de-b081-44dc-bab5-3ddfba91c2df-kube-api-access-qdtlj\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.016499 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95777ba-e0bd-4163-a3b4-7cfc9271a946-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.018218 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95777ba-e0bd-4163-a3b4-7cfc9271a946-kube-api-access-dz94l" (OuterVolumeSpecName: "kube-api-access-dz94l") pod "e95777ba-e0bd-4163-a3b4-7cfc9271a946" (UID: "e95777ba-e0bd-4163-a3b4-7cfc9271a946"). InnerVolumeSpecName "kube-api-access-dz94l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.118196 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz94l\" (UniqueName: \"kubernetes.io/projected/e95777ba-e0bd-4163-a3b4-7cfc9271a946-kube-api-access-dz94l\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.412071 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rckvw" event={"ID":"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5","Type":"ContainerStarted","Data":"992849b403eda4eee7d590a18b948af8411c2197aea5298592c97e0b5a53ebbe"} Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.413590 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z5tpt" event={"ID":"e95777ba-e0bd-4163-a3b4-7cfc9271a946","Type":"ContainerDied","Data":"b9542e652dbbac9c75cd94ead703f18da04d68a87fbe3d873eaafaedffb3c293"} Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.413620 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-z5tpt" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.413620 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9542e652dbbac9c75cd94ead703f18da04d68a87fbe3d873eaafaedffb3c293" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.415890 4835 generic.go:334] "Generic (PLEG): container finished" podID="bcd56cf6-8d38-4399-9bbe-e69e57255cca" containerID="6e0d7a782680e0d85f93b31b0d0dcd169a5401db122807afa70d92c2f4a592e1" exitCode=0 Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.415952 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hm6g4" event={"ID":"bcd56cf6-8d38-4399-9bbe-e69e57255cca","Type":"ContainerDied","Data":"6e0d7a782680e0d85f93b31b0d0dcd169a5401db122807afa70d92c2f4a592e1"} Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.415978 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hm6g4" event={"ID":"bcd56cf6-8d38-4399-9bbe-e69e57255cca","Type":"ContainerStarted","Data":"f5cf6b6243244fd3b00e4f4cf382240a0acdeb87b5dfb45a8cba7f1d9399546c"} Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.417758 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1ee6-account-create-update-h7mfh" event={"ID":"578241de-b081-44dc-bab5-3ddfba91c2df","Type":"ContainerDied","Data":"056220fdb75812aa58abf103439f1d6bda6ceb11bbbd0567accc601b809df860"} Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.417777 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1ee6-account-create-update-h7mfh" Feb 16 15:24:49 crc kubenswrapper[4835]: I0216 15:24:49.417783 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="056220fdb75812aa58abf103439f1d6bda6ceb11bbbd0567accc601b809df860" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.943717 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-chltq"] Feb 16 15:24:50 crc kubenswrapper[4835]: E0216 15:24:50.944471 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6808008-de25-4d2d-8753-945ad39d27b3" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944487 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6808008-de25-4d2d-8753-945ad39d27b3" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: E0216 15:24:50.944511 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95777ba-e0bd-4163-a3b4-7cfc9271a946" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944519 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95777ba-e0bd-4163-a3b4-7cfc9271a946" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: E0216 15:24:50.944548 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4724bb12-af57-4aba-9403-07a999cde053" containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944557 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4724bb12-af57-4aba-9403-07a999cde053" containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: E0216 15:24:50.944582 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1109c80c-b07a-4be2-8002-cadfbbc7e0af" containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944590 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1109c80c-b07a-4be2-8002-cadfbbc7e0af" containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: E0216 15:24:50.944601 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4a1831-012e-4c40-80ef-db237493e6ac" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944608 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4a1831-012e-4c40-80ef-db237493e6ac" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: E0216 15:24:50.944622 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578241de-b081-44dc-bab5-3ddfba91c2df" containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944629 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="578241de-b081-44dc-bab5-3ddfba91c2df" containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944851 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4a1831-012e-4c40-80ef-db237493e6ac" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944862 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6808008-de25-4d2d-8753-945ad39d27b3" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944875 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="578241de-b081-44dc-bab5-3ddfba91c2df" containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944891 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95777ba-e0bd-4163-a3b4-7cfc9271a946" containerName="mariadb-database-create" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944900 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1109c80c-b07a-4be2-8002-cadfbbc7e0af" 
containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.944920 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4724bb12-af57-4aba-9403-07a999cde053" containerName="mariadb-account-create-update" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.945822 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-chltq" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.949231 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.949384 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v9jjp" Feb 16 15:24:50 crc kubenswrapper[4835]: I0216 15:24:50.991834 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.000055 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-chltq"] Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.067925 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-strmn\" (UniqueName: \"kubernetes.io/projected/12db908e-5604-4c20-bfa8-ee01f8bac719-kube-api-access-strmn\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.067999 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-db-sync-config-data\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.068035 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-combined-ca-bundle\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.068119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-config-data\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.169413 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd56cf6-8d38-4399-9bbe-e69e57255cca-operator-scripts\") pod \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\" (UID: \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\") " Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.169477 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-877rw\" (UniqueName: \"kubernetes.io/projected/bcd56cf6-8d38-4399-9bbe-e69e57255cca-kube-api-access-877rw\") pod \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\" (UID: \"bcd56cf6-8d38-4399-9bbe-e69e57255cca\") " Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.169883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-combined-ca-bundle\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.169980 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-config-data\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.170189 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-strmn\" (UniqueName: \"kubernetes.io/projected/12db908e-5604-4c20-bfa8-ee01f8bac719-kube-api-access-strmn\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.170240 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-db-sync-config-data\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.170348 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd56cf6-8d38-4399-9bbe-e69e57255cca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcd56cf6-8d38-4399-9bbe-e69e57255cca" (UID: "bcd56cf6-8d38-4399-9bbe-e69e57255cca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.174607 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd56cf6-8d38-4399-9bbe-e69e57255cca-kube-api-access-877rw" (OuterVolumeSpecName: "kube-api-access-877rw") pod "bcd56cf6-8d38-4399-9bbe-e69e57255cca" (UID: "bcd56cf6-8d38-4399-9bbe-e69e57255cca"). InnerVolumeSpecName "kube-api-access-877rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.176018 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-config-data\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.176102 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-combined-ca-bundle\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.197876 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-strmn\" (UniqueName: \"kubernetes.io/projected/12db908e-5604-4c20-bfa8-ee01f8bac719-kube-api-access-strmn\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.200603 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-db-sync-config-data\") pod \"glance-db-sync-chltq\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.271574 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd56cf6-8d38-4399-9bbe-e69e57255cca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.271611 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-877rw\" (UniqueName: 
\"kubernetes.io/projected/bcd56cf6-8d38-4399-9bbe-e69e57255cca-kube-api-access-877rw\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.314738 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-chltq" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.442711 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hm6g4" event={"ID":"bcd56cf6-8d38-4399-9bbe-e69e57255cca","Type":"ContainerDied","Data":"f5cf6b6243244fd3b00e4f4cf382240a0acdeb87b5dfb45a8cba7f1d9399546c"} Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.442769 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5cf6b6243244fd3b00e4f4cf382240a0acdeb87b5dfb45a8cba7f1d9399546c" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.442853 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hm6g4" Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.678698 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:51 crc kubenswrapper[4835]: E0216 15:24:51.678839 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:24:51 crc kubenswrapper[4835]: E0216 15:24:51.678852 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:24:51 crc kubenswrapper[4835]: E0216 15:24:51.678892 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift 
podName:89ca92c6-cb91-49f1-a005-047759f93742 nodeName:}" failed. No retries permitted until 2026-02-16 15:24:59.678877347 +0000 UTC m=+1048.970870242 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift") pod "swift-storage-0" (UID: "89ca92c6-cb91-49f1-a005-047759f93742") : configmap "swift-ring-files" not found Feb 16 15:24:51 crc kubenswrapper[4835]: I0216 15:24:51.707113 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.059202 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 15:24:52 crc kubenswrapper[4835]: E0216 15:24:52.060133 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd56cf6-8d38-4399-9bbe-e69e57255cca" containerName="mariadb-account-create-update" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.060152 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd56cf6-8d38-4399-9bbe-e69e57255cca" containerName="mariadb-account-create-update" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.060387 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd56cf6-8d38-4399-9bbe-e69e57255cca" containerName="mariadb-account-create-update" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.061573 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.079977 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.080107 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.080175 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.080305 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mt6qz" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.112616 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.186350 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.186469 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.186509 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01805b9f-62e0-4054-84e9-6e6e6a448afc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " 
pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.186545 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tkm9\" (UniqueName: \"kubernetes.io/projected/01805b9f-62e0-4054-84e9-6e6e6a448afc-kube-api-access-6tkm9\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.186589 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01805b9f-62e0-4054-84e9-6e6e6a448afc-config\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.186623 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.186672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01805b9f-62e0-4054-84e9-6e6e6a448afc-scripts\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.288205 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01805b9f-62e0-4054-84e9-6e6e6a448afc-scripts\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.288296 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.288359 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.288385 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01805b9f-62e0-4054-84e9-6e6e6a448afc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.288401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tkm9\" (UniqueName: \"kubernetes.io/projected/01805b9f-62e0-4054-84e9-6e6e6a448afc-kube-api-access-6tkm9\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.288431 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01805b9f-62e0-4054-84e9-6e6e6a448afc-config\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.288459 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.290145 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01805b9f-62e0-4054-84e9-6e6e6a448afc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.290850 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01805b9f-62e0-4054-84e9-6e6e6a448afc-scripts\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.294181 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01805b9f-62e0-4054-84e9-6e6e6a448afc-config\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.294206 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.302956 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.304181 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/01805b9f-62e0-4054-84e9-6e6e6a448afc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.327209 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tkm9\" (UniqueName: \"kubernetes.io/projected/01805b9f-62e0-4054-84e9-6e6e6a448afc-kube-api-access-6tkm9\") pod \"ovn-northd-0\" (UID: \"01805b9f-62e0-4054-84e9-6e6e6a448afc\") " pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.391642 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 15:24:52 crc kubenswrapper[4835]: I0216 15:24:52.712656 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.145657 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.210342 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wfxbr"] Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.210613 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" podUID="6935249e-0331-48cf-9b65-1db34f680e9b" containerName="dnsmasq-dns" containerID="cri-o://f0d2fc7aea7bf3ef44dfbc82dc0be5ddf1b1e7b1ae375315c189aea7a8ce7332" gracePeriod=10 Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.257053 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e246a943-0c6d-4738-8a73-d3e576819680" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.304159 4835 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" podUID="6935249e-0331-48cf-9b65-1db34f680e9b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.419563 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.467830 4835 generic.go:334] "Generic (PLEG): container finished" podID="6935249e-0331-48cf-9b65-1db34f680e9b" containerID="f0d2fc7aea7bf3ef44dfbc82dc0be5ddf1b1e7b1ae375315c189aea7a8ce7332" exitCode=0 Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.467870 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" event={"ID":"6935249e-0331-48cf-9b65-1db34f680e9b","Type":"ContainerDied","Data":"f0d2fc7aea7bf3ef44dfbc82dc0be5ddf1b1e7b1ae375315c189aea7a8ce7332"} Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.884185 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hm6g4"] Feb 16 15:24:53 crc kubenswrapper[4835]: I0216 15:24:53.897025 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hm6g4"] Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.054062 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.129478 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-dns-svc\") pod \"6935249e-0331-48cf-9b65-1db34f680e9b\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.129579 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-nb\") pod \"6935249e-0331-48cf-9b65-1db34f680e9b\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.129633 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5x5n\" (UniqueName: \"kubernetes.io/projected/6935249e-0331-48cf-9b65-1db34f680e9b-kube-api-access-l5x5n\") pod \"6935249e-0331-48cf-9b65-1db34f680e9b\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.129650 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-sb\") pod \"6935249e-0331-48cf-9b65-1db34f680e9b\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.129745 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-config\") pod \"6935249e-0331-48cf-9b65-1db34f680e9b\" (UID: \"6935249e-0331-48cf-9b65-1db34f680e9b\") " Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.138651 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6935249e-0331-48cf-9b65-1db34f680e9b-kube-api-access-l5x5n" (OuterVolumeSpecName: "kube-api-access-l5x5n") pod "6935249e-0331-48cf-9b65-1db34f680e9b" (UID: "6935249e-0331-48cf-9b65-1db34f680e9b"). InnerVolumeSpecName "kube-api-access-l5x5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.191270 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6935249e-0331-48cf-9b65-1db34f680e9b" (UID: "6935249e-0331-48cf-9b65-1db34f680e9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.197602 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6935249e-0331-48cf-9b65-1db34f680e9b" (UID: "6935249e-0331-48cf-9b65-1db34f680e9b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.205276 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-config" (OuterVolumeSpecName: "config") pod "6935249e-0331-48cf-9b65-1db34f680e9b" (UID: "6935249e-0331-48cf-9b65-1db34f680e9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.210904 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6935249e-0331-48cf-9b65-1db34f680e9b" (UID: "6935249e-0331-48cf-9b65-1db34f680e9b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.231749 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.231798 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.231814 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5x5n\" (UniqueName: \"kubernetes.io/projected/6935249e-0331-48cf-9b65-1db34f680e9b-kube-api-access-l5x5n\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.231825 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.231837 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6935249e-0331-48cf-9b65-1db34f680e9b-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.257018 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.476496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"01805b9f-62e0-4054-84e9-6e6e6a448afc","Type":"ContainerStarted","Data":"849ddc87e0f1e073905f3fedfea3837f9cdc32c4f1864be0eaa16111508e5bf0"} Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.479973 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerStarted","Data":"2d6e922e925875c7cfa0b01fc2f519c054566cd5c0496ee93719de5a9b26f74f"} Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.481570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rckvw" event={"ID":"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5","Type":"ContainerStarted","Data":"ebc6319e15f4d38d278218f7838f833466a7457e43782a291fbd51ea0e9aebc6"} Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.483335 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" event={"ID":"6935249e-0331-48cf-9b65-1db34f680e9b","Type":"ContainerDied","Data":"aa99c5c33082172300225d75a9e1ac29d0ebcdcd01ad653008087d602ecd5d02"} Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.483383 4835 scope.go:117] "RemoveContainer" containerID="f0d2fc7aea7bf3ef44dfbc82dc0be5ddf1b1e7b1ae375315c189aea7a8ce7332" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.483386 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-wfxbr" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.505579 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.367397863 podStartE2EDuration="1m2.505561025s" podCreationTimestamp="2026-02-16 15:23:52 +0000 UTC" firstStartedPulling="2026-02-16 15:24:09.595431589 +0000 UTC m=+998.887424484" lastFinishedPulling="2026-02-16 15:24:53.733594751 +0000 UTC m=+1043.025587646" observedRunningTime="2026-02-16 15:24:54.50457272 +0000 UTC m=+1043.796565615" watchObservedRunningTime="2026-02-16 15:24:54.505561025 +0000 UTC m=+1043.797553920" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.509585 4835 scope.go:117] "RemoveContainer" containerID="ef301357d26b9e1f0c3eacece96b8e4aa87ad8779875d7a6ae4f1d205e74b9c0" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.525477 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wfxbr"] Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.532922 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-wfxbr"] Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.546301 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rckvw" podStartSLOduration=2.434660839 podStartE2EDuration="7.546282311s" podCreationTimestamp="2026-02-16 15:24:47 +0000 UTC" firstStartedPulling="2026-02-16 15:24:48.652048399 +0000 UTC m=+1037.944041294" lastFinishedPulling="2026-02-16 15:24:53.763669871 +0000 UTC m=+1043.055662766" observedRunningTime="2026-02-16 15:24:54.541490807 +0000 UTC m=+1043.833483702" watchObservedRunningTime="2026-02-16 15:24:54.546282311 +0000 UTC m=+1043.838275226" Feb 16 15:24:54 crc kubenswrapper[4835]: I0216 15:24:54.687086 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-chltq"] Feb 16 15:24:54 
crc kubenswrapper[4835]: W0216 15:24:54.698716 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12db908e_5604_4c20_bfa8_ee01f8bac719.slice/crio-f621106238418b7d0a0a58fc4fbcc5760a4d9a77be880cb01a1004b22550380c WatchSource:0}: Error finding container f621106238418b7d0a0a58fc4fbcc5760a4d9a77be880cb01a1004b22550380c: Status 404 returned error can't find the container with id f621106238418b7d0a0a58fc4fbcc5760a4d9a77be880cb01a1004b22550380c Feb 16 15:24:55 crc kubenswrapper[4835]: I0216 15:24:55.388241 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6935249e-0331-48cf-9b65-1db34f680e9b" path="/var/lib/kubelet/pods/6935249e-0331-48cf-9b65-1db34f680e9b/volumes" Feb 16 15:24:55 crc kubenswrapper[4835]: I0216 15:24:55.389132 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd56cf6-8d38-4399-9bbe-e69e57255cca" path="/var/lib/kubelet/pods/bcd56cf6-8d38-4399-9bbe-e69e57255cca/volumes" Feb 16 15:24:55 crc kubenswrapper[4835]: I0216 15:24:55.501709 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-chltq" event={"ID":"12db908e-5604-4c20-bfa8-ee01f8bac719","Type":"ContainerStarted","Data":"f621106238418b7d0a0a58fc4fbcc5760a4d9a77be880cb01a1004b22550380c"} Feb 16 15:24:56 crc kubenswrapper[4835]: I0216 15:24:56.516379 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"01805b9f-62e0-4054-84e9-6e6e6a448afc","Type":"ContainerStarted","Data":"bc8afc19b0da5061d939907e27443bea01bf58829eee97402f329ede509b1ee5"} Feb 16 15:24:56 crc kubenswrapper[4835]: I0216 15:24:56.517283 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"01805b9f-62e0-4054-84e9-6e6e6a448afc","Type":"ContainerStarted","Data":"aa0305e1c134706c89d4e62df9c6db661074a52c507ee4e2dcdbfbd48aaa7e8b"} Feb 16 15:24:56 crc kubenswrapper[4835]: I0216 15:24:56.518675 4835 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 15:24:56 crc kubenswrapper[4835]: I0216 15:24:56.563986 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.387285096 podStartE2EDuration="4.563932024s" podCreationTimestamp="2026-02-16 15:24:52 +0000 UTC" firstStartedPulling="2026-02-16 15:24:54.251860577 +0000 UTC m=+1043.543853472" lastFinishedPulling="2026-02-16 15:24:55.428507505 +0000 UTC m=+1044.720500400" observedRunningTime="2026-02-16 15:24:56.541348689 +0000 UTC m=+1045.833341624" watchObservedRunningTime="2026-02-16 15:24:56.563932024 +0000 UTC m=+1045.855924949" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.345675 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.646511 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-85znn"] Feb 16 15:24:57 crc kubenswrapper[4835]: E0216 15:24:57.646894 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6935249e-0331-48cf-9b65-1db34f680e9b" containerName="init" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.646909 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6935249e-0331-48cf-9b65-1db34f680e9b" containerName="init" Feb 16 15:24:57 crc kubenswrapper[4835]: E0216 15:24:57.646941 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6935249e-0331-48cf-9b65-1db34f680e9b" containerName="dnsmasq-dns" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.646949 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6935249e-0331-48cf-9b65-1db34f680e9b" containerName="dnsmasq-dns" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.647115 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6935249e-0331-48cf-9b65-1db34f680e9b" containerName="dnsmasq-dns" Feb 16 
15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.647734 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-85znn" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.665920 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-85znn"] Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.720281 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e896978-24a0-4f5e-bbc8-e33a887a98c0-operator-scripts\") pod \"cinder-db-create-85znn\" (UID: \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\") " pod="openstack/cinder-db-create-85znn" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.720742 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpf2t\" (UniqueName: \"kubernetes.io/projected/6e896978-24a0-4f5e-bbc8-e33a887a98c0-kube-api-access-vpf2t\") pod \"cinder-db-create-85znn\" (UID: \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\") " pod="openstack/cinder-db-create-85znn" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.744232 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-90e7-account-create-update-s5jqp"] Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.745801 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.747391 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.775467 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-90e7-account-create-update-s5jqp"] Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.822065 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-operator-scripts\") pod \"cinder-90e7-account-create-update-s5jqp\" (UID: \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\") " pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.822397 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpf2t\" (UniqueName: \"kubernetes.io/projected/6e896978-24a0-4f5e-bbc8-e33a887a98c0-kube-api-access-vpf2t\") pod \"cinder-db-create-85znn\" (UID: \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\") " pod="openstack/cinder-db-create-85znn" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.822432 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e896978-24a0-4f5e-bbc8-e33a887a98c0-operator-scripts\") pod \"cinder-db-create-85znn\" (UID: \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\") " pod="openstack/cinder-db-create-85znn" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.822463 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ztc\" (UniqueName: \"kubernetes.io/projected/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-kube-api-access-s5ztc\") pod \"cinder-90e7-account-create-update-s5jqp\" (UID: 
\"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\") " pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.823287 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e896978-24a0-4f5e-bbc8-e33a887a98c0-operator-scripts\") pod \"cinder-db-create-85znn\" (UID: \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\") " pod="openstack/cinder-db-create-85znn" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.830982 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-srq6s"] Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.832576 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.843513 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-srq6s"] Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.868355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpf2t\" (UniqueName: \"kubernetes.io/projected/6e896978-24a0-4f5e-bbc8-e33a887a98c0-kube-api-access-vpf2t\") pod \"cinder-db-create-85znn\" (UID: \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\") " pod="openstack/cinder-db-create-85znn" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.923736 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afc8716-571e-4e91-8a87-61e144cb3e91-operator-scripts\") pod \"cloudkitty-db-create-srq6s\" (UID: \"2afc8716-571e-4e91-8a87-61e144cb3e91\") " pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.923782 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65w9l\" (UniqueName: 
\"kubernetes.io/projected/2afc8716-571e-4e91-8a87-61e144cb3e91-kube-api-access-65w9l\") pod \"cloudkitty-db-create-srq6s\" (UID: \"2afc8716-571e-4e91-8a87-61e144cb3e91\") " pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.923823 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-operator-scripts\") pod \"cinder-90e7-account-create-update-s5jqp\" (UID: \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\") " pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.923875 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5ztc\" (UniqueName: \"kubernetes.io/projected/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-kube-api-access-s5ztc\") pod \"cinder-90e7-account-create-update-s5jqp\" (UID: \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\") " pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.924779 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-operator-scripts\") pod \"cinder-90e7-account-create-update-s5jqp\" (UID: \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\") " pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.946964 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5ztc\" (UniqueName: \"kubernetes.io/projected/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-kube-api-access-s5ztc\") pod \"cinder-90e7-account-create-update-s5jqp\" (UID: \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\") " pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.949808 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-0b76-account-create-update-m2jgq"] Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.950961 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.970789 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.973451 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0b76-account-create-update-m2jgq"] Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.973623 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:24:57 crc kubenswrapper[4835]: I0216 15:24:57.987273 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-85znn" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.019309 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-9gx68"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.020576 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.025872 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afc8716-571e-4e91-8a87-61e144cb3e91-operator-scripts\") pod \"cloudkitty-db-create-srq6s\" (UID: \"2afc8716-571e-4e91-8a87-61e144cb3e91\") " pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.025908 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65w9l\" (UniqueName: \"kubernetes.io/projected/2afc8716-571e-4e91-8a87-61e144cb3e91-kube-api-access-65w9l\") pod \"cloudkitty-db-create-srq6s\" (UID: \"2afc8716-571e-4e91-8a87-61e144cb3e91\") " pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.025952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dac99ad-85aa-4faf-b55d-224a04e2b659-operator-scripts\") pod \"barbican-0b76-account-create-update-m2jgq\" (UID: \"7dac99ad-85aa-4faf-b55d-224a04e2b659\") " pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.026050 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpz2j\" (UniqueName: \"kubernetes.io/projected/7dac99ad-85aa-4faf-b55d-224a04e2b659-kube-api-access-xpz2j\") pod \"barbican-0b76-account-create-update-m2jgq\" (UID: \"7dac99ad-85aa-4faf-b55d-224a04e2b659\") " pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.026770 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afc8716-571e-4e91-8a87-61e144cb3e91-operator-scripts\") pod 
\"cloudkitty-db-create-srq6s\" (UID: \"2afc8716-571e-4e91-8a87-61e144cb3e91\") " pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.042608 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.044589 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9gx68"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.046139 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.051296 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.056116 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-78dkx" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.090377 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.090820 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65w9l\" (UniqueName: \"kubernetes.io/projected/2afc8716-571e-4e91-8a87-61e144cb3e91-kube-api-access-65w9l\") pod \"cloudkitty-db-create-srq6s\" (UID: \"2afc8716-571e-4e91-8a87-61e144cb3e91\") " pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.114924 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-ec07-account-create-update-mllh2"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.120503 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.128992 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.130156 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-combined-ca-bundle\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.130234 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92g5\" (UniqueName: \"kubernetes.io/projected/99ac121e-3070-48a4-94df-938421346b96-kube-api-access-j92g5\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.130294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dac99ad-85aa-4faf-b55d-224a04e2b659-operator-scripts\") pod \"barbican-0b76-account-create-update-m2jgq\" (UID: \"7dac99ad-85aa-4faf-b55d-224a04e2b659\") " pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.130358 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpz2j\" (UniqueName: \"kubernetes.io/projected/7dac99ad-85aa-4faf-b55d-224a04e2b659-kube-api-access-xpz2j\") pod \"barbican-0b76-account-create-update-m2jgq\" (UID: \"7dac99ad-85aa-4faf-b55d-224a04e2b659\") " pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.130374 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-config-data\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.133051 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dac99ad-85aa-4faf-b55d-224a04e2b659-operator-scripts\") pod \"barbican-0b76-account-create-update-m2jgq\" (UID: \"7dac99ad-85aa-4faf-b55d-224a04e2b659\") " pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.154391 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.196336 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-ec07-account-create-update-mllh2"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.196500 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpz2j\" (UniqueName: \"kubernetes.io/projected/7dac99ad-85aa-4faf-b55d-224a04e2b659-kube-api-access-xpz2j\") pod \"barbican-0b76-account-create-update-m2jgq\" (UID: \"7dac99ad-85aa-4faf-b55d-224a04e2b659\") " pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.232678 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-config-data\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.232833 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcphf\" (UniqueName: \"kubernetes.io/projected/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-kube-api-access-pcphf\") pod \"cloudkitty-ec07-account-create-update-mllh2\" (UID: \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\") " pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.232871 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-combined-ca-bundle\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.232917 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j92g5\" (UniqueName: \"kubernetes.io/projected/99ac121e-3070-48a4-94df-938421346b96-kube-api-access-j92g5\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.232949 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-operator-scripts\") pod \"cloudkitty-ec07-account-create-update-mllh2\" (UID: \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\") " pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.233653 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-dtg9h"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.234873 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-dtg9h" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.237972 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-config-data\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.244499 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-combined-ca-bundle\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.261994 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dtg9h"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.262731 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j92g5\" (UniqueName: \"kubernetes.io/projected/99ac121e-3070-48a4-94df-938421346b96-kube-api-access-j92g5\") pod \"keystone-db-sync-9gx68\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.291778 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bm5j9"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.293015 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bm5j9" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.335762 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bm5j9"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.336743 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d210b1-a527-417c-a74b-e3363616d04b-operator-scripts\") pod \"barbican-db-create-dtg9h\" (UID: \"43d210b1-a527-417c-a74b-e3363616d04b\") " pod="openstack/barbican-db-create-dtg9h" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.336791 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq472\" (UniqueName: \"kubernetes.io/projected/43d210b1-a527-417c-a74b-e3363616d04b-kube-api-access-vq472\") pod \"barbican-db-create-dtg9h\" (UID: \"43d210b1-a527-417c-a74b-e3363616d04b\") " pod="openstack/barbican-db-create-dtg9h" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.336834 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719b9570-6729-4015-bbaa-0865b16b86d6-operator-scripts\") pod \"neutron-db-create-bm5j9\" (UID: \"719b9570-6729-4015-bbaa-0865b16b86d6\") " pod="openstack/neutron-db-create-bm5j9" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.336925 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcphf\" (UniqueName: \"kubernetes.io/projected/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-kube-api-access-pcphf\") pod \"cloudkitty-ec07-account-create-update-mllh2\" (UID: \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\") " pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.336948 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt4rj\" (UniqueName: \"kubernetes.io/projected/719b9570-6729-4015-bbaa-0865b16b86d6-kube-api-access-tt4rj\") pod \"neutron-db-create-bm5j9\" (UID: \"719b9570-6729-4015-bbaa-0865b16b86d6\") " pod="openstack/neutron-db-create-bm5j9" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.337062 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-operator-scripts\") pod \"cloudkitty-ec07-account-create-update-mllh2\" (UID: \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\") " pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.337827 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-operator-scripts\") pod \"cloudkitty-ec07-account-create-update-mllh2\" (UID: \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\") " pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.356110 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2711-account-create-update-97ktr"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.357785 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.359915 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.363727 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.370204 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2711-account-create-update-97ktr"] Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.401847 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcphf\" (UniqueName: \"kubernetes.io/projected/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-kube-api-access-pcphf\") pod \"cloudkitty-ec07-account-create-update-mllh2\" (UID: \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\") " pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.438958 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719b9570-6729-4015-bbaa-0865b16b86d6-operator-scripts\") pod \"neutron-db-create-bm5j9\" (UID: \"719b9570-6729-4015-bbaa-0865b16b86d6\") " pod="openstack/neutron-db-create-bm5j9" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.439194 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt4rj\" (UniqueName: \"kubernetes.io/projected/719b9570-6729-4015-bbaa-0865b16b86d6-kube-api-access-tt4rj\") pod \"neutron-db-create-bm5j9\" (UID: \"719b9570-6729-4015-bbaa-0865b16b86d6\") " pod="openstack/neutron-db-create-bm5j9" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.439338 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-operator-scripts\") pod \"neutron-2711-account-create-update-97ktr\" (UID: \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\") 
" pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.439426 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgx4z\" (UniqueName: \"kubernetes.io/projected/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-kube-api-access-vgx4z\") pod \"neutron-2711-account-create-update-97ktr\" (UID: \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\") " pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.439543 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d210b1-a527-417c-a74b-e3363616d04b-operator-scripts\") pod \"barbican-db-create-dtg9h\" (UID: \"43d210b1-a527-417c-a74b-e3363616d04b\") " pod="openstack/barbican-db-create-dtg9h" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.439624 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719b9570-6729-4015-bbaa-0865b16b86d6-operator-scripts\") pod \"neutron-db-create-bm5j9\" (UID: \"719b9570-6729-4015-bbaa-0865b16b86d6\") " pod="openstack/neutron-db-create-bm5j9" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.439634 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq472\" (UniqueName: \"kubernetes.io/projected/43d210b1-a527-417c-a74b-e3363616d04b-kube-api-access-vq472\") pod \"barbican-db-create-dtg9h\" (UID: \"43d210b1-a527-417c-a74b-e3363616d04b\") " pod="openstack/barbican-db-create-dtg9h" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.440345 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d210b1-a527-417c-a74b-e3363616d04b-operator-scripts\") pod \"barbican-db-create-dtg9h\" (UID: \"43d210b1-a527-417c-a74b-e3363616d04b\") " 
pod="openstack/barbican-db-create-dtg9h" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.458008 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq472\" (UniqueName: \"kubernetes.io/projected/43d210b1-a527-417c-a74b-e3363616d04b-kube-api-access-vq472\") pod \"barbican-db-create-dtg9h\" (UID: \"43d210b1-a527-417c-a74b-e3363616d04b\") " pod="openstack/barbican-db-create-dtg9h" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.461427 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt4rj\" (UniqueName: \"kubernetes.io/projected/719b9570-6729-4015-bbaa-0865b16b86d6-kube-api-access-tt4rj\") pod \"neutron-db-create-bm5j9\" (UID: \"719b9570-6729-4015-bbaa-0865b16b86d6\") " pod="openstack/neutron-db-create-bm5j9" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.491100 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9gx68" Feb 16 15:24:58 crc kubenswrapper[4835]: I0216 15:24:58.498890 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.555746 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-operator-scripts\") pod \"neutron-2711-account-create-update-97ktr\" (UID: \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\") " pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.556063 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgx4z\" (UniqueName: \"kubernetes.io/projected/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-kube-api-access-vgx4z\") pod \"neutron-2711-account-create-update-97ktr\" (UID: \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\") " pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.556863 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-operator-scripts\") pod \"neutron-2711-account-create-update-97ktr\" (UID: \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\") " pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.582125 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgx4z\" (UniqueName: \"kubernetes.io/projected/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-kube-api-access-vgx4z\") pod \"neutron-2711-account-create-update-97ktr\" (UID: \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\") " pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.583261 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-dtg9h" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.625293 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bm5j9" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.706168 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.724908 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-90e7-account-create-update-s5jqp"] Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.874811 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.891136 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qkfw5"] Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.892955 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qkfw5" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.896602 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.910399 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qkfw5"] Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.966342 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbtt\" (UniqueName: \"kubernetes.io/projected/af17e5a7-3351-4d2c-9214-fc8221a15fe9-kube-api-access-prbtt\") pod \"root-account-create-update-qkfw5\" (UID: \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\") " pod="openstack/root-account-create-update-qkfw5" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:58.966474 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af17e5a7-3351-4d2c-9214-fc8221a15fe9-operator-scripts\") pod \"root-account-create-update-qkfw5\" (UID: \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\") " pod="openstack/root-account-create-update-qkfw5" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.068259 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af17e5a7-3351-4d2c-9214-fc8221a15fe9-operator-scripts\") pod \"root-account-create-update-qkfw5\" (UID: \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\") " pod="openstack/root-account-create-update-qkfw5" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.068388 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prbtt\" (UniqueName: \"kubernetes.io/projected/af17e5a7-3351-4d2c-9214-fc8221a15fe9-kube-api-access-prbtt\") pod \"root-account-create-update-qkfw5\" (UID: 
\"af17e5a7-3351-4d2c-9214-fc8221a15fe9\") " pod="openstack/root-account-create-update-qkfw5" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.069325 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af17e5a7-3351-4d2c-9214-fc8221a15fe9-operator-scripts\") pod \"root-account-create-update-qkfw5\" (UID: \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\") " pod="openstack/root-account-create-update-qkfw5" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.087828 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prbtt\" (UniqueName: \"kubernetes.io/projected/af17e5a7-3351-4d2c-9214-fc8221a15fe9-kube-api-access-prbtt\") pod \"root-account-create-update-qkfw5\" (UID: \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\") " pod="openstack/root-account-create-update-qkfw5" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.257086 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qkfw5" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.552631 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-90e7-account-create-update-s5jqp" event={"ID":"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d","Type":"ContainerStarted","Data":"94f39ec6499a3160833fb194cf31bf253ecfc1329210bd08638866af6476c0be"} Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.553036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-90e7-account-create-update-s5jqp" event={"ID":"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d","Type":"ContainerStarted","Data":"be6298f63cec901597520ed3385918213376acde3199d3d2e407cc93f93c1d82"} Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.584796 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-90e7-account-create-update-s5jqp" podStartSLOduration=2.5847815069999998 podStartE2EDuration="2.584781507s" podCreationTimestamp="2026-02-16 15:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:24:59.582607411 +0000 UTC m=+1048.874600306" watchObservedRunningTime="2026-02-16 15:24:59.584781507 +0000 UTC m=+1048.876774402" Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.688221 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:24:59 crc kubenswrapper[4835]: E0216 15:24:59.688430 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:24:59 crc kubenswrapper[4835]: E0216 15:24:59.688458 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:24:59 crc kubenswrapper[4835]: E0216 15:24:59.688513 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift podName:89ca92c6-cb91-49f1-a005-047759f93742 nodeName:}" failed. No retries permitted until 2026-02-16 15:25:15.688496666 +0000 UTC m=+1064.980489561 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift") pod "swift-storage-0" (UID: "89ca92c6-cb91-49f1-a005-047759f93742") : configmap "swift-ring-files" not found Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.925383 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-srq6s"] Feb 16 15:24:59 crc kubenswrapper[4835]: W0216 15:24:59.930807 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2afc8716_571e_4e91_8a87_61e144cb3e91.slice/crio-30b7fe0c8628966851ec39684f1c8e04a8a9090c4f22a173ae7aa800b998e517 WatchSource:0}: Error finding container 30b7fe0c8628966851ec39684f1c8e04a8a9090c4f22a173ae7aa800b998e517: Status 404 returned error can't find the container with id 30b7fe0c8628966851ec39684f1c8e04a8a9090c4f22a173ae7aa800b998e517 Feb 16 15:24:59 crc kubenswrapper[4835]: I0216 15:24:59.972961 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-85znn"] Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.307618 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bm5j9"] Feb 16 15:25:00 crc kubenswrapper[4835]: W0216 15:25:00.332878 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719b9570_6729_4015_bbaa_0865b16b86d6.slice/crio-1c0dff4761e62689dcb3d7ae741f0985134f9c616e6bc23fd015832619f1f26e WatchSource:0}: Error finding container 1c0dff4761e62689dcb3d7ae741f0985134f9c616e6bc23fd015832619f1f26e: Status 404 returned error can't find the container with id 1c0dff4761e62689dcb3d7ae741f0985134f9c616e6bc23fd015832619f1f26e Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.347570 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2711-account-create-update-97ktr"] Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.359101 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dtg9h"] Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.408289 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-9gx68"] Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.442604 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0b76-account-create-update-m2jgq"] Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.452321 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-ec07-account-create-update-mllh2"] Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.462088 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qkfw5"] Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.561793 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2711-account-create-update-97ktr" event={"ID":"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc","Type":"ContainerStarted","Data":"02fabca4164cdb76bf8de78cd206d0856d2dec9e2252eb4b45b768d155184e43"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.562594 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0b76-account-create-update-m2jgq" 
event={"ID":"7dac99ad-85aa-4faf-b55d-224a04e2b659","Type":"ContainerStarted","Data":"32b349d2b2967e5ed44bdae77ba39b5397909be5de5d3c52c9309d1433eae5b5"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.564548 4835 generic.go:334] "Generic (PLEG): container finished" podID="f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d" containerID="94f39ec6499a3160833fb194cf31bf253ecfc1329210bd08638866af6476c0be" exitCode=0 Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.564623 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-90e7-account-create-update-s5jqp" event={"ID":"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d","Type":"ContainerDied","Data":"94f39ec6499a3160833fb194cf31bf253ecfc1329210bd08638866af6476c0be"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.567565 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gx68" event={"ID":"99ac121e-3070-48a4-94df-938421346b96","Type":"ContainerStarted","Data":"d32fed931e451ef3416ab281340325da8e9e244ce32f64b88696140d217dd344"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.569071 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-ec07-account-create-update-mllh2" event={"ID":"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b","Type":"ContainerStarted","Data":"ff5d06cd9f1f29de59a2a7a5d355e3cf1c6a17f0a1c4a367d9086a2e7ed3afb7"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.570302 4835 generic.go:334] "Generic (PLEG): container finished" podID="6e896978-24a0-4f5e-bbc8-e33a887a98c0" containerID="1b1268a9a545c4aa86c1fd07bb99fab9493d57738a823acd6d45606f02c6434d" exitCode=0 Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.570333 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-85znn" event={"ID":"6e896978-24a0-4f5e-bbc8-e33a887a98c0","Type":"ContainerDied","Data":"1b1268a9a545c4aa86c1fd07bb99fab9493d57738a823acd6d45606f02c6434d"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.570496 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-85znn" event={"ID":"6e896978-24a0-4f5e-bbc8-e33a887a98c0","Type":"ContainerStarted","Data":"3cf0325119211b87874b2b2bcdd1cd0441f085cde718ae75bf36c438cbfb185e"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.572408 4835 generic.go:334] "Generic (PLEG): container finished" podID="2afc8716-571e-4e91-8a87-61e144cb3e91" containerID="dad597ac6a40611bf47d7a079dca5cd3d946c265017395986fc2cfc357ed8c5d" exitCode=0 Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.572444 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-srq6s" event={"ID":"2afc8716-571e-4e91-8a87-61e144cb3e91","Type":"ContainerDied","Data":"dad597ac6a40611bf47d7a079dca5cd3d946c265017395986fc2cfc357ed8c5d"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.572491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-srq6s" event={"ID":"2afc8716-571e-4e91-8a87-61e144cb3e91","Type":"ContainerStarted","Data":"30b7fe0c8628966851ec39684f1c8e04a8a9090c4f22a173ae7aa800b998e517"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.576589 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qkfw5" event={"ID":"af17e5a7-3351-4d2c-9214-fc8221a15fe9","Type":"ContainerStarted","Data":"9336d0b29107773387ae5755faa7c46a9a89088f3bb9387ab7e5c67522a1589a"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.583343 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dtg9h" event={"ID":"43d210b1-a527-417c-a74b-e3363616d04b","Type":"ContainerStarted","Data":"99e898f4ad5a014b497927215b9716951791f5ccfd927f47f343290024c7f9d5"} Feb 16 15:25:00 crc kubenswrapper[4835]: I0216 15:25:00.585338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bm5j9" 
event={"ID":"719b9570-6729-4015-bbaa-0865b16b86d6","Type":"ContainerStarted","Data":"1c0dff4761e62689dcb3d7ae741f0985134f9c616e6bc23fd015832619f1f26e"} Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.599900 4835 generic.go:334] "Generic (PLEG): container finished" podID="7dac99ad-85aa-4faf-b55d-224a04e2b659" containerID="99212ac33b423a848f75337c7d41067df1ea00a35a71f61b28557e9648f0765a" exitCode=0 Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.599994 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0b76-account-create-update-m2jgq" event={"ID":"7dac99ad-85aa-4faf-b55d-224a04e2b659","Type":"ContainerDied","Data":"99212ac33b423a848f75337c7d41067df1ea00a35a71f61b28557e9648f0765a"} Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.601676 4835 generic.go:334] "Generic (PLEG): container finished" podID="7ee22ae0-4c3f-4a60-ad86-7f909b157b6b" containerID="9edd71ad76f0980e335e1a267c67fef8470920e4d4757633d84f7614c90b4194" exitCode=0 Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.602502 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-ec07-account-create-update-mllh2" event={"ID":"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b","Type":"ContainerDied","Data":"9edd71ad76f0980e335e1a267c67fef8470920e4d4757633d84f7614c90b4194"} Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.603213 4835 generic.go:334] "Generic (PLEG): container finished" podID="e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" containerID="ebc6319e15f4d38d278218f7838f833466a7457e43782a291fbd51ea0e9aebc6" exitCode=0 Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.603252 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rckvw" event={"ID":"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5","Type":"ContainerDied","Data":"ebc6319e15f4d38d278218f7838f833466a7457e43782a291fbd51ea0e9aebc6"} Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.604781 4835 generic.go:334] "Generic (PLEG): container 
finished" podID="af17e5a7-3351-4d2c-9214-fc8221a15fe9" containerID="b6ad8c630aa31e5cde2c9cacc1c22d78625cb36cd838c0dfefbc12fbed84e951" exitCode=0 Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.604871 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qkfw5" event={"ID":"af17e5a7-3351-4d2c-9214-fc8221a15fe9","Type":"ContainerDied","Data":"b6ad8c630aa31e5cde2c9cacc1c22d78625cb36cd838c0dfefbc12fbed84e951"} Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.605988 4835 generic.go:334] "Generic (PLEG): container finished" podID="43d210b1-a527-417c-a74b-e3363616d04b" containerID="380fedb0b1f9685f32232358d41c8717f51db16139290b11d055a794b9ef69bf" exitCode=0 Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.606023 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dtg9h" event={"ID":"43d210b1-a527-417c-a74b-e3363616d04b","Type":"ContainerDied","Data":"380fedb0b1f9685f32232358d41c8717f51db16139290b11d055a794b9ef69bf"} Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.607282 4835 generic.go:334] "Generic (PLEG): container finished" podID="719b9570-6729-4015-bbaa-0865b16b86d6" containerID="bc9048a113e4b19e6c72db3a50337c1145168cff317a85fc85129ffce3837462" exitCode=0 Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.607350 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bm5j9" event={"ID":"719b9570-6729-4015-bbaa-0865b16b86d6","Type":"ContainerDied","Data":"bc9048a113e4b19e6c72db3a50337c1145168cff317a85fc85129ffce3837462"} Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.608598 4835 generic.go:334] "Generic (PLEG): container finished" podID="d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc" containerID="1b465b241de862723ce7f3bfd761cd47da79889f07ddc8886b34ef2a1064e885" exitCode=0 Feb 16 15:25:01 crc kubenswrapper[4835]: I0216 15:25:01.608678 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-2711-account-create-update-97ktr" event={"ID":"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc","Type":"ContainerDied","Data":"1b465b241de862723ce7f3bfd761cd47da79889f07ddc8886b34ef2a1064e885"} Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.084811 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-95gmb" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.094520 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.176500 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afc8716-571e-4e91-8a87-61e144cb3e91-operator-scripts\") pod \"2afc8716-571e-4e91-8a87-61e144cb3e91\" (UID: \"2afc8716-571e-4e91-8a87-61e144cb3e91\") " Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.176573 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65w9l\" (UniqueName: \"kubernetes.io/projected/2afc8716-571e-4e91-8a87-61e144cb3e91-kube-api-access-65w9l\") pod \"2afc8716-571e-4e91-8a87-61e144cb3e91\" (UID: \"2afc8716-571e-4e91-8a87-61e144cb3e91\") " Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.177282 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afc8716-571e-4e91-8a87-61e144cb3e91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2afc8716-571e-4e91-8a87-61e144cb3e91" (UID: "2afc8716-571e-4e91-8a87-61e144cb3e91"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.186328 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2afc8716-571e-4e91-8a87-61e144cb3e91-kube-api-access-65w9l" (OuterVolumeSpecName: "kube-api-access-65w9l") pod "2afc8716-571e-4e91-8a87-61e144cb3e91" (UID: "2afc8716-571e-4e91-8a87-61e144cb3e91"). InnerVolumeSpecName "kube-api-access-65w9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.278501 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2afc8716-571e-4e91-8a87-61e144cb3e91-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.278550 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65w9l\" (UniqueName: \"kubernetes.io/projected/2afc8716-571e-4e91-8a87-61e144cb3e91-kube-api-access-65w9l\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.280265 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.289018 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-85znn" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.316113 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-ts8jt" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.369363 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.379608 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e896978-24a0-4f5e-bbc8-e33a887a98c0-operator-scripts\") pod \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\" (UID: \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\") " Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.379696 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5ztc\" (UniqueName: \"kubernetes.io/projected/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-kube-api-access-s5ztc\") pod \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\" (UID: \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\") " Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.379730 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-operator-scripts\") pod \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\" (UID: \"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d\") " Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.380040 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpf2t\" (UniqueName: \"kubernetes.io/projected/6e896978-24a0-4f5e-bbc8-e33a887a98c0-kube-api-access-vpf2t\") pod \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\" (UID: \"6e896978-24a0-4f5e-bbc8-e33a887a98c0\") " Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.380979 4835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e896978-24a0-4f5e-bbc8-e33a887a98c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e896978-24a0-4f5e-bbc8-e33a887a98c0" (UID: "6e896978-24a0-4f5e-bbc8-e33a887a98c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.381211 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d" (UID: "f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.384100 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-kube-api-access-s5ztc" (OuterVolumeSpecName: "kube-api-access-s5ztc") pod "f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d" (UID: "f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d"). InnerVolumeSpecName "kube-api-access-s5ztc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.393059 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e896978-24a0-4f5e-bbc8-e33a887a98c0-kube-api-access-vpf2t" (OuterVolumeSpecName: "kube-api-access-vpf2t") pod "6e896978-24a0-4f5e-bbc8-e33a887a98c0" (UID: "6e896978-24a0-4f5e-bbc8-e33a887a98c0"). InnerVolumeSpecName "kube-api-access-vpf2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.482600 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpf2t\" (UniqueName: \"kubernetes.io/projected/6e896978-24a0-4f5e-bbc8-e33a887a98c0-kube-api-access-vpf2t\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.482633 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e896978-24a0-4f5e-bbc8-e33a887a98c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.482642 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5ztc\" (UniqueName: \"kubernetes.io/projected/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-kube-api-access-s5ztc\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.482652 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.618897 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-90e7-account-create-update-s5jqp" event={"ID":"f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d","Type":"ContainerDied","Data":"be6298f63cec901597520ed3385918213376acde3199d3d2e407cc93f93c1d82"} Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.618933 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be6298f63cec901597520ed3385918213376acde3199d3d2e407cc93f93c1d82" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.618979 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-90e7-account-create-update-s5jqp" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.621208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-85znn" event={"ID":"6e896978-24a0-4f5e-bbc8-e33a887a98c0","Type":"ContainerDied","Data":"3cf0325119211b87874b2b2bcdd1cd0441f085cde718ae75bf36c438cbfb185e"} Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.621226 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cf0325119211b87874b2b2bcdd1cd0441f085cde718ae75bf36c438cbfb185e" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.621285 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-85znn" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.623001 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-srq6s" event={"ID":"2afc8716-571e-4e91-8a87-61e144cb3e91","Type":"ContainerDied","Data":"30b7fe0c8628966851ec39684f1c8e04a8a9090c4f22a173ae7aa800b998e517"} Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.623047 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b7fe0c8628966851ec39684f1c8e04a8a9090c4f22a173ae7aa800b998e517" Feb 16 15:25:02 crc kubenswrapper[4835]: I0216 15:25:02.623171 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-srq6s" Feb 16 15:25:03 crc kubenswrapper[4835]: I0216 15:25:03.257446 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e246a943-0c6d-4738-8a73-d3e576819680" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:25:08 crc kubenswrapper[4835]: I0216 15:25:08.874499 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:08 crc kubenswrapper[4835]: I0216 15:25:08.877178 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.276772 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dtg9h" Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.416466 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d210b1-a527-417c-a74b-e3363616d04b-operator-scripts\") pod \"43d210b1-a527-417c-a74b-e3363616d04b\" (UID: \"43d210b1-a527-417c-a74b-e3363616d04b\") " Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.416740 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq472\" (UniqueName: \"kubernetes.io/projected/43d210b1-a527-417c-a74b-e3363616d04b-kube-api-access-vq472\") pod \"43d210b1-a527-417c-a74b-e3363616d04b\" (UID: \"43d210b1-a527-417c-a74b-e3363616d04b\") " Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.417178 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43d210b1-a527-417c-a74b-e3363616d04b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43d210b1-a527-417c-a74b-e3363616d04b" (UID: 
"43d210b1-a527-417c-a74b-e3363616d04b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.417518 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d210b1-a527-417c-a74b-e3363616d04b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.431451 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d210b1-a527-417c-a74b-e3363616d04b-kube-api-access-vq472" (OuterVolumeSpecName: "kube-api-access-vq472") pod "43d210b1-a527-417c-a74b-e3363616d04b" (UID: "43d210b1-a527-417c-a74b-e3363616d04b"). InnerVolumeSpecName "kube-api-access-vq472". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.520061 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq472\" (UniqueName: \"kubernetes.io/projected/43d210b1-a527-417c-a74b-e3363616d04b-kube-api-access-vq472\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.698246 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dtg9h" event={"ID":"43d210b1-a527-417c-a74b-e3363616d04b","Type":"ContainerDied","Data":"99e898f4ad5a014b497927215b9716951791f5ccfd927f47f343290024c7f9d5"} Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.698523 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e898f4ad5a014b497927215b9716951791f5ccfd927f47f343290024c7f9d5" Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.698254 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-dtg9h" Feb 16 15:25:09 crc kubenswrapper[4835]: I0216 15:25:09.699606 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:11 crc kubenswrapper[4835]: I0216 15:25:11.262276 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-st4vx" podUID="2efbff9d-b303-430c-b06c-36b79284a3f1" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:25:11 crc kubenswrapper[4835]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:25:11 crc kubenswrapper[4835]: > Feb 16 15:25:11 crc kubenswrapper[4835]: I0216 15:25:11.984802 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:25:11 crc kubenswrapper[4835]: I0216 15:25:11.985369 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="config-reloader" containerID="cri-o://9d23443d6e0d527badad6636d6f2c9bfc145e4f8471aaf2183e464f1930f93b2" gracePeriod=600 Feb 16 15:25:11 crc kubenswrapper[4835]: I0216 15:25:11.985095 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="prometheus" containerID="cri-o://7fe625906ae9bc94a2919299e32c02960c711d849f6b00249e1dc090fe97af11" gracePeriod=600 Feb 16 15:25:11 crc kubenswrapper[4835]: I0216 15:25:11.985360 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="thanos-sidecar" containerID="cri-o://2d6e922e925875c7cfa0b01fc2f519c054566cd5c0496ee93719de5a9b26f74f" gracePeriod=600 Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.443488 4835 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 15:25:12 crc kubenswrapper[4835]: E0216 15:25:12.636146 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Feb 16 15:25:12 crc kubenswrapper[4835]: E0216 15:25:12.636278 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j92g5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,W
indowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-9gx68_openstack(99ac121e-3070-48a4-94df-938421346b96): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:25:12 crc kubenswrapper[4835]: E0216 15:25:12.637477 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-9gx68" podUID="99ac121e-3070-48a4-94df-938421346b96" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.722430 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2711-account-create-update-97ktr" event={"ID":"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc","Type":"ContainerDied","Data":"02fabca4164cdb76bf8de78cd206d0856d2dec9e2252eb4b45b768d155184e43"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.722467 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02fabca4164cdb76bf8de78cd206d0856d2dec9e2252eb4b45b768d155184e43" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.724660 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0b76-account-create-update-m2jgq" event={"ID":"7dac99ad-85aa-4faf-b55d-224a04e2b659","Type":"ContainerDied","Data":"32b349d2b2967e5ed44bdae77ba39b5397909be5de5d3c52c9309d1433eae5b5"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.724677 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32b349d2b2967e5ed44bdae77ba39b5397909be5de5d3c52c9309d1433eae5b5" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.725960 4835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cloudkitty-ec07-account-create-update-mllh2" event={"ID":"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b","Type":"ContainerDied","Data":"ff5d06cd9f1f29de59a2a7a5d355e3cf1c6a17f0a1c4a367d9086a2e7ed3afb7"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.725986 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5d06cd9f1f29de59a2a7a5d355e3cf1c6a17f0a1c4a367d9086a2e7ed3afb7" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.727683 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rckvw" event={"ID":"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5","Type":"ContainerDied","Data":"992849b403eda4eee7d590a18b948af8411c2197aea5298592c97e0b5a53ebbe"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.727741 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992849b403eda4eee7d590a18b948af8411c2197aea5298592c97e0b5a53ebbe" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.739645 4835 generic.go:334] "Generic (PLEG): container finished" podID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerID="2d6e922e925875c7cfa0b01fc2f519c054566cd5c0496ee93719de5a9b26f74f" exitCode=0 Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.739883 4835 generic.go:334] "Generic (PLEG): container finished" podID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerID="9d23443d6e0d527badad6636d6f2c9bfc145e4f8471aaf2183e464f1930f93b2" exitCode=0 Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.739952 4835 generic.go:334] "Generic (PLEG): container finished" podID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerID="7fe625906ae9bc94a2919299e32c02960c711d849f6b00249e1dc090fe97af11" exitCode=0 Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.739717 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerDied","Data":"2d6e922e925875c7cfa0b01fc2f519c054566cd5c0496ee93719de5a9b26f74f"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.740124 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerDied","Data":"9d23443d6e0d527badad6636d6f2c9bfc145e4f8471aaf2183e464f1930f93b2"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.740186 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerDied","Data":"7fe625906ae9bc94a2919299e32c02960c711d849f6b00249e1dc090fe97af11"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.741906 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qkfw5" event={"ID":"af17e5a7-3351-4d2c-9214-fc8221a15fe9","Type":"ContainerDied","Data":"9336d0b29107773387ae5755faa7c46a9a89088f3bb9387ab7e5c67522a1589a"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.741944 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9336d0b29107773387ae5755faa7c46a9a89088f3bb9387ab7e5c67522a1589a" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.744663 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bm5j9" event={"ID":"719b9570-6729-4015-bbaa-0865b16b86d6","Type":"ContainerDied","Data":"1c0dff4761e62689dcb3d7ae741f0985134f9c616e6bc23fd015832619f1f26e"} Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.744704 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0dff4761e62689dcb3d7ae741f0985134f9c616e6bc23fd015832619f1f26e" Feb 16 15:25:12 crc kubenswrapper[4835]: E0216 15:25:12.746566 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-9gx68" podUID="99ac121e-3070-48a4-94df-938421346b96" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.850606 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bm5j9" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.855801 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qkfw5" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.909269 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2711-account-create-update-97ktr" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.909711 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-ec07-account-create-update-mllh2" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.913684 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rckvw" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.920735 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0b76-account-create-update-m2jgq" Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937731 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af17e5a7-3351-4d2c-9214-fc8221a15fe9-operator-scripts\") pod \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\" (UID: \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\") " Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937796 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-ring-data-devices\") pod \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937817 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt4rj\" (UniqueName: \"kubernetes.io/projected/719b9570-6729-4015-bbaa-0865b16b86d6-kube-api-access-tt4rj\") pod \"719b9570-6729-4015-bbaa-0865b16b86d6\" (UID: \"719b9570-6729-4015-bbaa-0865b16b86d6\") " Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937852 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-scripts\") pod \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") " Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937873 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpz2j\" (UniqueName: \"kubernetes.io/projected/7dac99ad-85aa-4faf-b55d-224a04e2b659-kube-api-access-xpz2j\") pod \"7dac99ad-85aa-4faf-b55d-224a04e2b659\" (UID: \"7dac99ad-85aa-4faf-b55d-224a04e2b659\") " Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937897 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-operator-scripts\") pod \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\" (UID: \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937916 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqf42\" (UniqueName: \"kubernetes.io/projected/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-kube-api-access-kqf42\") pod \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937942 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-swiftconf\") pod \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.937983 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prbtt\" (UniqueName: \"kubernetes.io/projected/af17e5a7-3351-4d2c-9214-fc8221a15fe9-kube-api-access-prbtt\") pod \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\" (UID: \"af17e5a7-3351-4d2c-9214-fc8221a15fe9\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938006 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dac99ad-85aa-4faf-b55d-224a04e2b659-operator-scripts\") pod \"7dac99ad-85aa-4faf-b55d-224a04e2b659\" (UID: \"7dac99ad-85aa-4faf-b55d-224a04e2b659\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938028 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcphf\" (UniqueName: \"kubernetes.io/projected/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-kube-api-access-pcphf\") pod \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\" (UID: \"7ee22ae0-4c3f-4a60-ad86-7f909b157b6b\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938053 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-dispersionconf\") pod \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938078 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-etc-swift\") pod \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938100 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-operator-scripts\") pod \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\" (UID: \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938159 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-combined-ca-bundle\") pod \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\" (UID: \"e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938183 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgx4z\" (UniqueName: \"kubernetes.io/projected/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-kube-api-access-vgx4z\") pod \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\" (UID: \"d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938242 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719b9570-6729-4015-bbaa-0865b16b86d6-operator-scripts\") pod \"719b9570-6729-4015-bbaa-0865b16b86d6\" (UID: \"719b9570-6729-4015-bbaa-0865b16b86d6\") "
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.938964 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719b9570-6729-4015-bbaa-0865b16b86d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "719b9570-6729-4015-bbaa-0865b16b86d6" (UID: "719b9570-6729-4015-bbaa-0865b16b86d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.939393 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af17e5a7-3351-4d2c-9214-fc8221a15fe9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af17e5a7-3351-4d2c-9214-fc8221a15fe9" (UID: "af17e5a7-3351-4d2c-9214-fc8221a15fe9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.939803 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dac99ad-85aa-4faf-b55d-224a04e2b659-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dac99ad-85aa-4faf-b55d-224a04e2b659" (UID: "7dac99ad-85aa-4faf-b55d-224a04e2b659"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.945385 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" (UID: "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.947219 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af17e5a7-3351-4d2c-9214-fc8221a15fe9-kube-api-access-prbtt" (OuterVolumeSpecName: "kube-api-access-prbtt") pod "af17e5a7-3351-4d2c-9214-fc8221a15fe9" (UID: "af17e5a7-3351-4d2c-9214-fc8221a15fe9"). InnerVolumeSpecName "kube-api-access-prbtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.947541 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ee22ae0-4c3f-4a60-ad86-7f909b157b6b" (UID: "7ee22ae0-4c3f-4a60-ad86-7f909b157b6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.947639 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" (UID: "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.948028 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dac99ad-85aa-4faf-b55d-224a04e2b659-kube-api-access-xpz2j" (OuterVolumeSpecName: "kube-api-access-xpz2j") pod "7dac99ad-85aa-4faf-b55d-224a04e2b659" (UID: "7dac99ad-85aa-4faf-b55d-224a04e2b659"). InnerVolumeSpecName "kube-api-access-xpz2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.950901 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc" (UID: "d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.954008 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-kube-api-access-kqf42" (OuterVolumeSpecName: "kube-api-access-kqf42") pod "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" (UID: "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5"). InnerVolumeSpecName "kube-api-access-kqf42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.954057 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-kube-api-access-pcphf" (OuterVolumeSpecName: "kube-api-access-pcphf") pod "7ee22ae0-4c3f-4a60-ad86-7f909b157b6b" (UID: "7ee22ae0-4c3f-4a60-ad86-7f909b157b6b"). InnerVolumeSpecName "kube-api-access-pcphf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.963208 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-kube-api-access-vgx4z" (OuterVolumeSpecName: "kube-api-access-vgx4z") pod "d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc" (UID: "d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc"). InnerVolumeSpecName "kube-api-access-vgx4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.972017 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719b9570-6729-4015-bbaa-0865b16b86d6-kube-api-access-tt4rj" (OuterVolumeSpecName: "kube-api-access-tt4rj") pod "719b9570-6729-4015-bbaa-0865b16b86d6" (UID: "719b9570-6729-4015-bbaa-0865b16b86d6"). InnerVolumeSpecName "kube-api-access-tt4rj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.976520 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" (UID: "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:25:12 crc kubenswrapper[4835]: I0216 15:25:12.999642 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" (UID: "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.033159 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-scripts" (OuterVolumeSpecName: "scripts") pod "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" (UID: "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.034380 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" (UID: "e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039375 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af17e5a7-3351-4d2c-9214-fc8221a15fe9-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039415 4835 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039424 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt4rj\" (UniqueName: \"kubernetes.io/projected/719b9570-6729-4015-bbaa-0865b16b86d6-kube-api-access-tt4rj\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039435 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039445 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpz2j\" (UniqueName: \"kubernetes.io/projected/7dac99ad-85aa-4faf-b55d-224a04e2b659-kube-api-access-xpz2j\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039455 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039464 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqf42\" (UniqueName: \"kubernetes.io/projected/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-kube-api-access-kqf42\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039485 4835 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039494 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prbtt\" (UniqueName: \"kubernetes.io/projected/af17e5a7-3351-4d2c-9214-fc8221a15fe9-kube-api-access-prbtt\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039503 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dac99ad-85aa-4faf-b55d-224a04e2b659-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039510 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcphf\" (UniqueName: \"kubernetes.io/projected/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b-kube-api-access-pcphf\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039518 4835 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039546 4835 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039558 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039567 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039575 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgx4z\" (UniqueName: \"kubernetes.io/projected/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc-kube-api-access-vgx4z\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.039583 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/719b9570-6729-4015-bbaa-0865b16b86d6-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.076871 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243119 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2800ecb-4ec5-4930-a820-d9680894ad21-config-out\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243177 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-2\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243378 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243448 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mskpg\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-kube-api-access-mskpg\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243521 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-1\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243559 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-0\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243575 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-thanos-prometheus-http-client-file\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243608 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-tls-assets\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-web-config\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.243647 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-config\") pod \"e2800ecb-4ec5-4930-a820-d9680894ad21\" (UID: \"e2800ecb-4ec5-4930-a820-d9680894ad21\") "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.245193 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.246469 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.247238 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.249155 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.249603 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-kube-api-access-mskpg" (OuterVolumeSpecName: "kube-api-access-mskpg") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "kube-api-access-mskpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.250149 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.252179 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2800ecb-4ec5-4930-a820-d9680894ad21-config-out" (OuterVolumeSpecName: "config-out") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.254195 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-config" (OuterVolumeSpecName: "config") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.254420 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e246a943-0c6d-4738-8a73-d3e576819680" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.263626 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.273135 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-web-config" (OuterVolumeSpecName: "web-config") pod "e2800ecb-4ec5-4930-a820-d9680894ad21" (UID: "e2800ecb-4ec5-4930-a820-d9680894ad21"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.346021 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") on node \"crc\" "
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.346864 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mskpg\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-kube-api-access-mskpg\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.346884 4835 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.346937 4835 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.346955 4835 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.346968 4835 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2800ecb-4ec5-4930-a820-d9680894ad21-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.346979 4835 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-web-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.346991 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e2800ecb-4ec5-4930-a820-d9680894ad21-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.347002 4835 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2800ecb-4ec5-4930-a820-d9680894ad21-config-out\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.347016 4835 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e2800ecb-4ec5-4930-a820-d9680894ad21-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.371927 4835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.372087 4835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c") on node "crc"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.449916 4835 reconciler_common.go:293] "Volume detached for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") on node \"crc\" DevicePath \"\""
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.755850 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e2800ecb-4ec5-4930-a820-d9680894ad21","Type":"ContainerDied","Data":"4facf151be714740510472dc244ccd713668c8193975d0f88b7dac871634b4f1"}
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.756826 4835 scope.go:117] "RemoveContainer" containerID="2d6e922e925875c7cfa0b01fc2f519c054566cd5c0496ee93719de5a9b26f74f"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.756213 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.758330 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qkfw5"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.758516 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0b76-account-create-update-m2jgq"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.758586 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2711-account-create-update-97ktr"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.758604 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-chltq" event={"ID":"12db908e-5604-4c20-bfa8-ee01f8bac719","Type":"ContainerStarted","Data":"e020718c964ec483f0caad7938898c6e69a1ecae07b4415440f14dd72065c661"}
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.758686 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rckvw"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.758789 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bm5j9"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.758894 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-ec07-account-create-update-mllh2"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.782139 4835 scope.go:117] "RemoveContainer" containerID="9d23443d6e0d527badad6636d6f2c9bfc145e4f8471aaf2183e464f1930f93b2"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.803184 4835 scope.go:117] "RemoveContainer" containerID="7fe625906ae9bc94a2919299e32c02960c711d849f6b00249e1dc090fe97af11"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.803972 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-chltq" podStartSLOduration=5.778050088 podStartE2EDuration="23.803945064s" podCreationTimestamp="2026-02-16 15:24:50 +0000 UTC" firstStartedPulling="2026-02-16 15:24:54.700311565 +0000 UTC m=+1043.992304460" lastFinishedPulling="2026-02-16 15:25:12.726206541 +0000 UTC m=+1062.018199436" observedRunningTime="2026-02-16 15:25:13.783411802 +0000 UTC m=+1063.075404697" watchObservedRunningTime="2026-02-16 15:25:13.803945064 +0000 UTC m=+1063.095937959"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.828136 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.842301 4835 scope.go:117] "RemoveContainer" containerID="d0fedda1ba590b70251a73c818a04c00be922fd47dcc229ed09e2be530b31a4a"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.869037 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.910686 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911233 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911248 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911261 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afc8716-571e-4e91-8a87-61e144cb3e91" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911267 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afc8716-571e-4e91-8a87-61e144cb3e91" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911278 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d210b1-a527-417c-a74b-e3363616d04b" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911284 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d210b1-a527-417c-a74b-e3363616d04b" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911298 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911306 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911312 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="thanos-sidecar"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911317 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="thanos-sidecar"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911323 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af17e5a7-3351-4d2c-9214-fc8221a15fe9" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911329 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="af17e5a7-3351-4d2c-9214-fc8221a15fe9" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911336 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="config-reloader"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911342 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="config-reloader"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911357 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e896978-24a0-4f5e-bbc8-e33a887a98c0" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911362 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e896978-24a0-4f5e-bbc8-e33a887a98c0" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911378 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="prometheus"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911383 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="prometheus"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911392 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719b9570-6729-4015-bbaa-0865b16b86d6" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911398 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="719b9570-6729-4015-bbaa-0865b16b86d6" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911408 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dac99ad-85aa-4faf-b55d-224a04e2b659" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911414 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dac99ad-85aa-4faf-b55d-224a04e2b659" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911430 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" containerName="swift-ring-rebalance"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911437 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" containerName="swift-ring-rebalance"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911446 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="init-config-reloader"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911452 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="init-config-reloader"
Feb 16 15:25:13 crc kubenswrapper[4835]: E0216 15:25:13.911463 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee22ae0-4c3f-4a60-ad86-7f909b157b6b" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911468 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee22ae0-4c3f-4a60-ad86-7f909b157b6b" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911663 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee22ae0-4c3f-4a60-ad86-7f909b157b6b" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911681 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afc8716-571e-4e91-8a87-61e144cb3e91" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911691 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="719b9570-6729-4015-bbaa-0865b16b86d6" containerName="mariadb-database-create"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911714 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="af17e5a7-3351-4d2c-9214-fc8221a15fe9" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911724 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="config-reloader"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911735 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d" containerName="mariadb-account-create-update"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911744 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="prometheus"
Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911759 4835 memory_manager.go:354] "RemoveStaleState removing
state" podUID="6e896978-24a0-4f5e-bbc8-e33a887a98c0" containerName="mariadb-database-create" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911771 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5" containerName="swift-ring-rebalance" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911780 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" containerName="thanos-sidecar" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911791 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d210b1-a527-417c-a74b-e3363616d04b" containerName="mariadb-database-create" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911804 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dac99ad-85aa-4faf-b55d-224a04e2b659" containerName="mariadb-account-create-update" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.911816 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc" containerName="mariadb-account-create-update" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.913409 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.915847 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gbtvz" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.916069 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.916178 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.916416 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.916583 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.916748 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.916909 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.918577 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.921139 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 15:25:13 crc kubenswrapper[4835]: I0216 15:25:13.927365 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060154 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-config\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060198 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060243 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r5wp\" (UniqueName: \"kubernetes.io/projected/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-kube-api-access-5r5wp\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060262 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060285 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" 
(UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060312 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060335 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060355 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060420 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060439 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060462 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060477 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.060554 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162183 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162228 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-config\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162270 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r5wp\" (UniqueName: \"kubernetes.io/projected/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-kube-api-access-5r5wp\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162288 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162313 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162339 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162387 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162406 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162467 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162487 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162512 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.162570 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.163462 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.164089 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 
15:25:14.164355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.166925 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.167193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.167360 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.169252 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.169961 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.170020 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.170064 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5d10da0f3db55069b2d5c8d1b5275ce7c3d76b215aa95646bfe310a7bc72f24b/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.170938 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.172207 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") 
" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.179941 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-config\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.181565 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r5wp\" (UniqueName: \"kubernetes.io/projected/a20bb04f-11d1-4f24-a96d-2c451f98b8bd-kube-api-access-5r5wp\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.230194 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c21c5fae-6d75-49e5-9958-46fb3c75bf2c\") pod \"prometheus-metric-storage-0\" (UID: \"a20bb04f-11d1-4f24-a96d-2c451f98b8bd\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.235679 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.717403 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:25:14 crc kubenswrapper[4835]: I0216 15:25:14.767893 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a20bb04f-11d1-4f24-a96d-2c451f98b8bd","Type":"ContainerStarted","Data":"162e76ec93874d8d09dc3965f3a717bc3213f01f15e854e8885ea1feae891e5c"} Feb 16 15:25:15 crc kubenswrapper[4835]: I0216 15:25:15.397908 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2800ecb-4ec5-4930-a820-d9680894ad21" path="/var/lib/kubelet/pods/e2800ecb-4ec5-4930-a820-d9680894ad21/volumes" Feb 16 15:25:15 crc kubenswrapper[4835]: I0216 15:25:15.698859 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:25:15 crc kubenswrapper[4835]: I0216 15:25:15.711517 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89ca92c6-cb91-49f1-a005-047759f93742-etc-swift\") pod \"swift-storage-0\" (UID: \"89ca92c6-cb91-49f1-a005-047759f93742\") " pod="openstack/swift-storage-0" Feb 16 15:25:15 crc kubenswrapper[4835]: I0216 15:25:15.909281 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 15:25:16 crc kubenswrapper[4835]: I0216 15:25:16.252540 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-st4vx" podUID="2efbff9d-b303-430c-b06c-36b79284a3f1" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:25:16 crc kubenswrapper[4835]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:25:16 crc kubenswrapper[4835]: > Feb 16 15:25:16 crc kubenswrapper[4835]: I0216 15:25:16.290518 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:25:16 crc kubenswrapper[4835]: I0216 15:25:16.586363 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 15:25:16 crc kubenswrapper[4835]: I0216 15:25:16.790111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"f1078cfa3e0c2b213a73abcb0db0f078e39a9cf3fabb0948ad0f835b1e678cac"} Feb 16 15:25:17 crc kubenswrapper[4835]: I0216 15:25:17.800289 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a20bb04f-11d1-4f24-a96d-2c451f98b8bd","Type":"ContainerStarted","Data":"59424e5950298a07610e42be8f431b558b16a1ae26074700db7771a9e159b17a"} Feb 16 15:25:18 crc kubenswrapper[4835]: I0216 15:25:18.816564 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"241f9931d305f42a6c723330321f162b4eb821082250b9712dbf8482ecced2b3"} Feb 16 15:25:18 crc kubenswrapper[4835]: I0216 15:25:18.816608 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"ba1225b1f9ffbef92a5342db809481220a16f48f1381ef947e8d7ff88ff497ca"} Feb 16 15:25:18 crc kubenswrapper[4835]: I0216 15:25:18.816623 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"9b26ca5a56bae2a31dc8fc6d3e52eeeec4e29339696b801d817585f70ad7d5d5"} Feb 16 15:25:18 crc kubenswrapper[4835]: I0216 15:25:18.816634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"d9c43ae7ac9eb3bdf019c5c9b8c7d0cf872e5106e88a982059067a5a807b8b62"} Feb 16 15:25:19 crc kubenswrapper[4835]: I0216 15:25:19.840906 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"73308fd20dbf0fa7c0dfcc7835fea45054772dcddcfa59150ff205b04c417b4d"} Feb 16 15:25:19 crc kubenswrapper[4835]: I0216 15:25:19.841251 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"b99be0547baea30fc1f918c4d11e99fa0958e99278536157ad2820dc82205b8b"} Feb 16 15:25:19 crc kubenswrapper[4835]: I0216 15:25:19.843495 4835 generic.go:334] "Generic (PLEG): container finished" podID="12db908e-5604-4c20-bfa8-ee01f8bac719" containerID="e020718c964ec483f0caad7938898c6e69a1ecae07b4415440f14dd72065c661" exitCode=0 Feb 16 15:25:19 crc kubenswrapper[4835]: I0216 15:25:19.843591 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-chltq" event={"ID":"12db908e-5604-4c20-bfa8-ee01f8bac719","Type":"ContainerDied","Data":"e020718c964ec483f0caad7938898c6e69a1ecae07b4415440f14dd72065c661"} Feb 16 15:25:20 crc kubenswrapper[4835]: I0216 15:25:20.855514 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"4dc97a0eb419e2bff84fa791c02764676f04270255bb9e757e448844307c6051"} Feb 16 15:25:20 crc kubenswrapper[4835]: I0216 15:25:20.856620 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"808ebf584ad27741e50bc6bc90c6c27c01342c6b341274a455214fe336c45b2b"} Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.248649 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-st4vx" podUID="2efbff9d-b303-430c-b06c-36b79284a3f1" containerName="ovn-controller" probeResult="failure" output=< Feb 16 15:25:21 crc kubenswrapper[4835]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 15:25:21 crc kubenswrapper[4835]: > Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.298286 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kxl4t" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.463866 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-chltq" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.563363 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-st4vx-config-5tcqf"] Feb 16 15:25:21 crc kubenswrapper[4835]: E0216 15:25:21.563904 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12db908e-5604-4c20-bfa8-ee01f8bac719" containerName="glance-db-sync" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.563968 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="12db908e-5604-4c20-bfa8-ee01f8bac719" containerName="glance-db-sync" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.564207 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="12db908e-5604-4c20-bfa8-ee01f8bac719" containerName="glance-db-sync" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.564922 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.570597 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.576436 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-st4vx-config-5tcqf"] Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.608478 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-combined-ca-bundle\") pod \"12db908e-5604-4c20-bfa8-ee01f8bac719\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.608538 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-strmn\" (UniqueName: \"kubernetes.io/projected/12db908e-5604-4c20-bfa8-ee01f8bac719-kube-api-access-strmn\") pod 
\"12db908e-5604-4c20-bfa8-ee01f8bac719\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.608625 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-config-data\") pod \"12db908e-5604-4c20-bfa8-ee01f8bac719\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.608785 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-db-sync-config-data\") pod \"12db908e-5604-4c20-bfa8-ee01f8bac719\" (UID: \"12db908e-5604-4c20-bfa8-ee01f8bac719\") " Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.613104 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "12db908e-5604-4c20-bfa8-ee01f8bac719" (UID: "12db908e-5604-4c20-bfa8-ee01f8bac719"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.613166 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12db908e-5604-4c20-bfa8-ee01f8bac719-kube-api-access-strmn" (OuterVolumeSpecName: "kube-api-access-strmn") pod "12db908e-5604-4c20-bfa8-ee01f8bac719" (UID: "12db908e-5604-4c20-bfa8-ee01f8bac719"). InnerVolumeSpecName "kube-api-access-strmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.639475 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12db908e-5604-4c20-bfa8-ee01f8bac719" (UID: "12db908e-5604-4c20-bfa8-ee01f8bac719"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.658883 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-config-data" (OuterVolumeSpecName: "config-data") pod "12db908e-5604-4c20-bfa8-ee01f8bac719" (UID: "12db908e-5604-4c20-bfa8-ee01f8bac719"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.712916 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnmhm\" (UniqueName: \"kubernetes.io/projected/594dbde4-ed07-4c17-8a78-d8343568ad90-kube-api-access-lnmhm\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713163 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-additional-scripts\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713225 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713243 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-scripts\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713271 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run-ovn\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713550 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-log-ovn\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713704 4835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713728 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 
15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713740 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-strmn\" (UniqueName: \"kubernetes.io/projected/12db908e-5604-4c20-bfa8-ee01f8bac719-kube-api-access-strmn\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.713754 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12db908e-5604-4c20-bfa8-ee01f8bac719-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815442 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-scripts\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815490 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run-ovn\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815590 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-log-ovn\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: 
\"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815634 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnmhm\" (UniqueName: \"kubernetes.io/projected/594dbde4-ed07-4c17-8a78-d8343568ad90-kube-api-access-lnmhm\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815768 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-additional-scripts\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815812 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815832 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run-ovn\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.815819 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-log-ovn\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: 
\"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.818055 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-scripts\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.819134 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-additional-scripts\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.834118 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnmhm\" (UniqueName: \"kubernetes.io/projected/594dbde4-ed07-4c17-8a78-d8343568ad90-kube-api-access-lnmhm\") pod \"ovn-controller-st4vx-config-5tcqf\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.865459 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-chltq" event={"ID":"12db908e-5604-4c20-bfa8-ee01f8bac719","Type":"ContainerDied","Data":"f621106238418b7d0a0a58fc4fbcc5760a4d9a77be880cb01a1004b22550380c"} Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.865485 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-chltq" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.877258 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f621106238418b7d0a0a58fc4fbcc5760a4d9a77be880cb01a1004b22550380c" Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.879514 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"47cf1bcdf4c2eba6ca70bb41465e0a6578b6ddb8d3335780afbdc904c0fd5c01"} Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.879571 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"bf227c2ad29f287e950b960d34a91b5ea1a724280efc4413490ee79f778d15c2"} Feb 16 15:25:21 crc kubenswrapper[4835]: I0216 15:25:21.888025 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.164228 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-bbvh8"] Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.167461 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.190373 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-bbvh8"] Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.334582 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.334659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7d7\" (UniqueName: \"kubernetes.io/projected/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-kube-api-access-fd7d7\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.334864 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.334949 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.335061 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-config\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.439843 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.440163 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-config\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.440282 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.440475 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd7d7\" (UniqueName: \"kubernetes.io/projected/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-kube-api-access-fd7d7\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.440614 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.441779 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-config\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.442813 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.446149 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.448248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.461057 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd7d7\" (UniqueName: 
\"kubernetes.io/projected/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-kube-api-access-fd7d7\") pod \"dnsmasq-dns-5b946c75cc-bbvh8\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.555847 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.591261 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-st4vx-config-5tcqf"] Feb 16 15:25:22 crc kubenswrapper[4835]: W0216 15:25:22.606378 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod594dbde4_ed07_4c17_8a78_d8343568ad90.slice/crio-60f36278db8e46265e197f8e50864fbd4820e8a35359065678250719300148aa WatchSource:0}: Error finding container 60f36278db8e46265e197f8e50864fbd4820e8a35359065678250719300148aa: Status 404 returned error can't find the container with id 60f36278db8e46265e197f8e50864fbd4820e8a35359065678250719300148aa Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.889822 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-st4vx-config-5tcqf" event={"ID":"594dbde4-ed07-4c17-8a78-d8343568ad90","Type":"ContainerStarted","Data":"60f36278db8e46265e197f8e50864fbd4820e8a35359065678250719300148aa"} Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.899231 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"8cbc82f178a3ebf2188f85924c772d46c864f09d9e75359fe703d61762b0c838"} Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.899282 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"02407e72123c21117f94a98529efdfc3a4138abd78a446eafdb81c5440d27a9b"} Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.899296 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"af336cf63ce9ba5ee0e3a2e2e6f4772a29c460ef81e677fe0f89fdff8c014dac"} Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.899309 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"89770497d1d24df13ef8150656c69ffb129f4e4f25aeb1707e80c890fefc7f7b"} Feb 16 15:25:22 crc kubenswrapper[4835]: I0216 15:25:22.966143 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-bbvh8"] Feb 16 15:25:22 crc kubenswrapper[4835]: W0216 15:25:22.974423 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43eb7ef3_0c31_4cf9_833c_731cc3399f5d.slice/crio-4baf4cb40e231f4ea69cafdcaa764e1bf399925461162987d638e5629ba7e484 WatchSource:0}: Error finding container 4baf4cb40e231f4ea69cafdcaa764e1bf399925461162987d638e5629ba7e484: Status 404 returned error can't find the container with id 4baf4cb40e231f4ea69cafdcaa764e1bf399925461162987d638e5629ba7e484 Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.254715 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="e246a943-0c6d-4738-8a73-d3e576819680" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.909781 4835 generic.go:334] "Generic (PLEG): container finished" podID="a20bb04f-11d1-4f24-a96d-2c451f98b8bd" 
containerID="59424e5950298a07610e42be8f431b558b16a1ae26074700db7771a9e159b17a" exitCode=0 Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.909863 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a20bb04f-11d1-4f24-a96d-2c451f98b8bd","Type":"ContainerDied","Data":"59424e5950298a07610e42be8f431b558b16a1ae26074700db7771a9e159b17a"} Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.916984 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89ca92c6-cb91-49f1-a005-047759f93742","Type":"ContainerStarted","Data":"38bffb7600e7d877b6e913937730e871d705b4fbb9f2bde13178f5a2a96925b5"} Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.921756 4835 generic.go:334] "Generic (PLEG): container finished" podID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" containerID="db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e" exitCode=0 Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.921858 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" event={"ID":"43eb7ef3-0c31-4cf9-833c-731cc3399f5d","Type":"ContainerDied","Data":"db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e"} Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.921885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" event={"ID":"43eb7ef3-0c31-4cf9-833c-731cc3399f5d","Type":"ContainerStarted","Data":"4baf4cb40e231f4ea69cafdcaa764e1bf399925461162987d638e5629ba7e484"} Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.926265 4835 generic.go:334] "Generic (PLEG): container finished" podID="594dbde4-ed07-4c17-8a78-d8343568ad90" containerID="1b1b90ea0e2a380fd833e123941bd651ed72f6787e386fae69d5e9a55bf8e11c" exitCode=0 Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.926312 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-st4vx-config-5tcqf" event={"ID":"594dbde4-ed07-4c17-8a78-d8343568ad90","Type":"ContainerDied","Data":"1b1b90ea0e2a380fd833e123941bd651ed72f6787e386fae69d5e9a55bf8e11c"} Feb 16 15:25:23 crc kubenswrapper[4835]: I0216 15:25:23.989033 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.327312291 podStartE2EDuration="41.989007507s" podCreationTimestamp="2026-02-16 15:24:42 +0000 UTC" firstStartedPulling="2026-02-16 15:25:16.645774996 +0000 UTC m=+1065.937767891" lastFinishedPulling="2026-02-16 15:25:21.307470212 +0000 UTC m=+1070.599463107" observedRunningTime="2026-02-16 15:25:23.976680357 +0000 UTC m=+1073.268673252" watchObservedRunningTime="2026-02-16 15:25:23.989007507 +0000 UTC m=+1073.281000402" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.265665 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-bbvh8"] Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.334239 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d8tfl"] Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.335727 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.337646 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.352340 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d8tfl"] Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.496927 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.496978 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.497173 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.497291 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvv8\" (UniqueName: \"kubernetes.io/projected/5617ce0e-e930-4990-958c-0851d3e4c9fd-kube-api-access-kqvv8\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.497351 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.497399 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-config\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.598730 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.598825 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvv8\" (UniqueName: \"kubernetes.io/projected/5617ce0e-e930-4990-958c-0851d3e4c9fd-kube-api-access-kqvv8\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.598877 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.598937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-config\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.598980 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.599003 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.599712 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.599860 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-config\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 
15:25:24.599895 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.600445 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.600480 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.632264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvv8\" (UniqueName: \"kubernetes.io/projected/5617ce0e-e930-4990-958c-0851d3e4c9fd-kube-api-access-kqvv8\") pod \"dnsmasq-dns-74f6bcbc87-d8tfl\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.649200 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.938829 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a20bb04f-11d1-4f24-a96d-2c451f98b8bd","Type":"ContainerStarted","Data":"c8f01dba9a270d8f721272aa1c4ead45f8bce8dbb4c77eaf123fda4696417d99"} Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.941845 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" event={"ID":"43eb7ef3-0c31-4cf9-833c-731cc3399f5d","Type":"ContainerStarted","Data":"e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f"} Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.943327 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:24 crc kubenswrapper[4835]: I0216 15:25:24.964811 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" podStartSLOduration=2.964794327 podStartE2EDuration="2.964794327s" podCreationTimestamp="2026-02-16 15:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:24.959660824 +0000 UTC m=+1074.251653739" watchObservedRunningTime="2026-02-16 15:25:24.964794327 +0000 UTC m=+1074.256787222" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.101764 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d8tfl"] Feb 16 15:25:25 crc kubenswrapper[4835]: W0216 15:25:25.106260 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5617ce0e_e930_4990_958c_0851d3e4c9fd.slice/crio-9e2367077bf2c7a63110c25b4ee97b973fd69725f4e1c643ab2b8af4b75586fb WatchSource:0}: Error finding container 
9e2367077bf2c7a63110c25b4ee97b973fd69725f4e1c643ab2b8af4b75586fb: Status 404 returned error can't find the container with id 9e2367077bf2c7a63110c25b4ee97b973fd69725f4e1c643ab2b8af4b75586fb Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.282082 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.412230 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-log-ovn\") pod \"594dbde4-ed07-4c17-8a78-d8343568ad90\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.412275 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run\") pod \"594dbde4-ed07-4c17-8a78-d8343568ad90\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.412346 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnmhm\" (UniqueName: \"kubernetes.io/projected/594dbde4-ed07-4c17-8a78-d8343568ad90-kube-api-access-lnmhm\") pod \"594dbde4-ed07-4c17-8a78-d8343568ad90\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.412399 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "594dbde4-ed07-4c17-8a78-d8343568ad90" (UID: "594dbde4-ed07-4c17-8a78-d8343568ad90"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.412417 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run" (OuterVolumeSpecName: "var-run") pod "594dbde4-ed07-4c17-8a78-d8343568ad90" (UID: "594dbde4-ed07-4c17-8a78-d8343568ad90"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.412992 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-additional-scripts\") pod \"594dbde4-ed07-4c17-8a78-d8343568ad90\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.413123 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run-ovn\") pod \"594dbde4-ed07-4c17-8a78-d8343568ad90\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.413174 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-scripts\") pod \"594dbde4-ed07-4c17-8a78-d8343568ad90\" (UID: \"594dbde4-ed07-4c17-8a78-d8343568ad90\") " Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.413263 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "594dbde4-ed07-4c17-8a78-d8343568ad90" (UID: "594dbde4-ed07-4c17-8a78-d8343568ad90"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.413573 4835 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.413585 4835 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.413593 4835 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/594dbde4-ed07-4c17-8a78-d8343568ad90-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.413901 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "594dbde4-ed07-4c17-8a78-d8343568ad90" (UID: "594dbde4-ed07-4c17-8a78-d8343568ad90"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.414193 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-scripts" (OuterVolumeSpecName: "scripts") pod "594dbde4-ed07-4c17-8a78-d8343568ad90" (UID: "594dbde4-ed07-4c17-8a78-d8343568ad90"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.416821 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594dbde4-ed07-4c17-8a78-d8343568ad90-kube-api-access-lnmhm" (OuterVolumeSpecName: "kube-api-access-lnmhm") pod "594dbde4-ed07-4c17-8a78-d8343568ad90" (UID: "594dbde4-ed07-4c17-8a78-d8343568ad90"). InnerVolumeSpecName "kube-api-access-lnmhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.515471 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.515499 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnmhm\" (UniqueName: \"kubernetes.io/projected/594dbde4-ed07-4c17-8a78-d8343568ad90-kube-api-access-lnmhm\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.515512 4835 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/594dbde4-ed07-4c17-8a78-d8343568ad90-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.955568 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-st4vx-config-5tcqf" event={"ID":"594dbde4-ed07-4c17-8a78-d8343568ad90","Type":"ContainerDied","Data":"60f36278db8e46265e197f8e50864fbd4820e8a35359065678250719300148aa"} Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.957096 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f36278db8e46265e197f8e50864fbd4820e8a35359065678250719300148aa" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.956307 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-st4vx-config-5tcqf" Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.959236 4835 generic.go:334] "Generic (PLEG): container finished" podID="5617ce0e-e930-4990-958c-0851d3e4c9fd" containerID="7f9aa8d2f495cdc8f6185cf70a6f2804e33a023f998de7f861b9f955a818aa01" exitCode=0 Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.959419 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" podUID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" containerName="dnsmasq-dns" containerID="cri-o://e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f" gracePeriod=10 Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.960659 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" event={"ID":"5617ce0e-e930-4990-958c-0851d3e4c9fd","Type":"ContainerDied","Data":"7f9aa8d2f495cdc8f6185cf70a6f2804e33a023f998de7f861b9f955a818aa01"} Feb 16 15:25:25 crc kubenswrapper[4835]: I0216 15:25:25.960696 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" event={"ID":"5617ce0e-e930-4990-958c-0851d3e4c9fd","Type":"ContainerStarted","Data":"9e2367077bf2c7a63110c25b4ee97b973fd69725f4e1c643ab2b8af4b75586fb"} Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.278567 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-st4vx" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.378849 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-st4vx-config-5tcqf"] Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.386831 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-st4vx-config-5tcqf"] Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.506053 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-st4vx-config-jbs9s"] Feb 
16 15:25:26 crc kubenswrapper[4835]: E0216 15:25:26.506741 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594dbde4-ed07-4c17-8a78-d8343568ad90" containerName="ovn-config" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.506824 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="594dbde4-ed07-4c17-8a78-d8343568ad90" containerName="ovn-config" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.507119 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="594dbde4-ed07-4c17-8a78-d8343568ad90" containerName="ovn-config" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.507938 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.510417 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.521521 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-st4vx-config-jbs9s"] Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.550868 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.640323 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-sb\") pod \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.640789 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd7d7\" (UniqueName: \"kubernetes.io/projected/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-kube-api-access-fd7d7\") pod \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.640941 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-nb\") pod \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.641171 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-dns-svc\") pod \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.641246 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-config\") pod \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\" (UID: \"43eb7ef3-0c31-4cf9-833c-731cc3399f5d\") " Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.641503 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5kjgv\" (UniqueName: \"kubernetes.io/projected/04ebe596-5b04-41e9-acab-61ee99c17107-kube-api-access-5kjgv\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.641639 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-scripts\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.641729 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.641825 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-additional-scripts\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.641907 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run-ovn\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.642018 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-log-ovn\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.645575 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-kube-api-access-fd7d7" (OuterVolumeSpecName: "kube-api-access-fd7d7") pod "43eb7ef3-0c31-4cf9-833c-731cc3399f5d" (UID: "43eb7ef3-0c31-4cf9-833c-731cc3399f5d"). InnerVolumeSpecName "kube-api-access-fd7d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.684861 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43eb7ef3-0c31-4cf9-833c-731cc3399f5d" (UID: "43eb7ef3-0c31-4cf9-833c-731cc3399f5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.687775 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43eb7ef3-0c31-4cf9-833c-731cc3399f5d" (UID: "43eb7ef3-0c31-4cf9-833c-731cc3399f5d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.694936 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43eb7ef3-0c31-4cf9-833c-731cc3399f5d" (UID: "43eb7ef3-0c31-4cf9-833c-731cc3399f5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.708779 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-config" (OuterVolumeSpecName: "config") pod "43eb7ef3-0c31-4cf9-833c-731cc3399f5d" (UID: "43eb7ef3-0c31-4cf9-833c-731cc3399f5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.744006 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-additional-scripts\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.744440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run-ovn\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.744511 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-log-ovn\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: 
\"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.744770 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjgv\" (UniqueName: \"kubernetes.io/projected/04ebe596-5b04-41e9-acab-61ee99c17107-kube-api-access-5kjgv\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.744827 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-scripts\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.744870 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.744965 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.744989 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd7d7\" (UniqueName: \"kubernetes.io/projected/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-kube-api-access-fd7d7\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.745004 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.745001 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run-ovn\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.745108 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.745017 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.745189 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb7ef3-0c31-4cf9-833c-731cc3399f5d-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.745109 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-log-ovn\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.745228 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-additional-scripts\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.747247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-scripts\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.761219 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjgv\" (UniqueName: \"kubernetes.io/projected/04ebe596-5b04-41e9-acab-61ee99c17107-kube-api-access-5kjgv\") pod \"ovn-controller-st4vx-config-jbs9s\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.838725 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.975451 4835 generic.go:334] "Generic (PLEG): container finished" podID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" containerID="e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f" exitCode=0 Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.975720 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.975697 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" event={"ID":"43eb7ef3-0c31-4cf9-833c-731cc3399f5d","Type":"ContainerDied","Data":"e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f"} Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.976483 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-bbvh8" event={"ID":"43eb7ef3-0c31-4cf9-833c-731cc3399f5d","Type":"ContainerDied","Data":"4baf4cb40e231f4ea69cafdcaa764e1bf399925461162987d638e5629ba7e484"} Feb 16 15:25:26 crc kubenswrapper[4835]: I0216 15:25:26.976851 4835 scope.go:117] "RemoveContainer" containerID="e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.010830 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" event={"ID":"5617ce0e-e930-4990-958c-0851d3e4c9fd","Type":"ContainerStarted","Data":"53406c7f61382c61fdb4ab44dda81ab8647f1b31955f5f277040f27e0ca3071e"} Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.014312 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.039028 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a20bb04f-11d1-4f24-a96d-2c451f98b8bd","Type":"ContainerStarted","Data":"782fe8ac89c269655be0e73970f917ab4d271e520355c276b1366227dc14c67b"} Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.039076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a20bb04f-11d1-4f24-a96d-2c451f98b8bd","Type":"ContainerStarted","Data":"949cd929b49f81e971a8d256615e363b6b38634816a8c4964d33d6cbef2c3cba"} Feb 16 
15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.056743 4835 scope.go:117] "RemoveContainer" containerID="db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.095628 4835 scope.go:117] "RemoveContainer" containerID="e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f" Feb 16 15:25:27 crc kubenswrapper[4835]: E0216 15:25:27.096910 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f\": container with ID starting with e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f not found: ID does not exist" containerID="e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.096942 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f"} err="failed to get container status \"e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f\": rpc error: code = NotFound desc = could not find container \"e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f\": container with ID starting with e172e8c752b08037291bdb0f91c85e37584b1e69eaf01c0c257445c2580de50f not found: ID does not exist" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.096965 4835 scope.go:117] "RemoveContainer" containerID="db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.097178 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" podStartSLOduration=3.097167374 podStartE2EDuration="3.097167374s" podCreationTimestamp="2026-02-16 15:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 15:25:27.039522039 +0000 UTC m=+1076.331514934" watchObservedRunningTime="2026-02-16 15:25:27.097167374 +0000 UTC m=+1076.389160269" Feb 16 15:25:27 crc kubenswrapper[4835]: E0216 15:25:27.097211 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e\": container with ID starting with db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e not found: ID does not exist" containerID="db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.097245 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e"} err="failed to get container status \"db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e\": rpc error: code = NotFound desc = could not find container \"db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e\": container with ID starting with db9e66beaa3a6ee4503e587600a3abc05bcbe7ef65dfcb908fa9ecdd90a1ee1e not found: ID does not exist" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.121627 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-bbvh8"] Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.131851 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-bbvh8"] Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.132912 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.13288816 podStartE2EDuration="14.13288816s" podCreationTimestamp="2026-02-16 15:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
15:25:27.09624089 +0000 UTC m=+1076.388233795" watchObservedRunningTime="2026-02-16 15:25:27.13288816 +0000 UTC m=+1076.424881065" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.342376 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-st4vx-config-jbs9s"] Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.392801 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" path="/var/lib/kubelet/pods/43eb7ef3-0c31-4cf9-833c-731cc3399f5d/volumes" Feb 16 15:25:27 crc kubenswrapper[4835]: I0216 15:25:27.393355 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594dbde4-ed07-4c17-8a78-d8343568ad90" path="/var/lib/kubelet/pods/594dbde4-ed07-4c17-8a78-d8343568ad90/volumes" Feb 16 15:25:28 crc kubenswrapper[4835]: I0216 15:25:28.048274 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gx68" event={"ID":"99ac121e-3070-48a4-94df-938421346b96","Type":"ContainerStarted","Data":"5bbbc79bd8c474bc62ee085349da0604456d84cfb63fb7d0e31661454fa854a5"} Feb 16 15:25:28 crc kubenswrapper[4835]: I0216 15:25:28.061463 4835 generic.go:334] "Generic (PLEG): container finished" podID="04ebe596-5b04-41e9-acab-61ee99c17107" containerID="6f35b66214973307f150dff8aa34629732910d43582e310773633a1c678723fc" exitCode=0 Feb 16 15:25:28 crc kubenswrapper[4835]: I0216 15:25:28.061518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-st4vx-config-jbs9s" event={"ID":"04ebe596-5b04-41e9-acab-61ee99c17107","Type":"ContainerDied","Data":"6f35b66214973307f150dff8aa34629732910d43582e310773633a1c678723fc"} Feb 16 15:25:28 crc kubenswrapper[4835]: I0216 15:25:28.061574 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-st4vx-config-jbs9s" 
event={"ID":"04ebe596-5b04-41e9-acab-61ee99c17107","Type":"ContainerStarted","Data":"2da44b1014cf0e5d2018125c52a7df731ba940c22f6fbc2914541c1510182073"} Feb 16 15:25:28 crc kubenswrapper[4835]: I0216 15:25:28.074749 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-9gx68" podStartSLOduration=3.665368435 podStartE2EDuration="31.07472902s" podCreationTimestamp="2026-02-16 15:24:57 +0000 UTC" firstStartedPulling="2026-02-16 15:25:00.396258247 +0000 UTC m=+1049.688251142" lastFinishedPulling="2026-02-16 15:25:27.805618822 +0000 UTC m=+1077.097611727" observedRunningTime="2026-02-16 15:25:28.069185676 +0000 UTC m=+1077.361178571" watchObservedRunningTime="2026-02-16 15:25:28.07472902 +0000 UTC m=+1077.366721915" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.235934 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.235980 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.242658 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.456973 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.555750 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run\") pod \"04ebe596-5b04-41e9-acab-61ee99c17107\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.555861 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run" (OuterVolumeSpecName: "var-run") pod "04ebe596-5b04-41e9-acab-61ee99c17107" (UID: "04ebe596-5b04-41e9-acab-61ee99c17107"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.555885 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kjgv\" (UniqueName: \"kubernetes.io/projected/04ebe596-5b04-41e9-acab-61ee99c17107-kube-api-access-5kjgv\") pod \"04ebe596-5b04-41e9-acab-61ee99c17107\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.555931 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-log-ovn\") pod \"04ebe596-5b04-41e9-acab-61ee99c17107\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.555969 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run-ovn\") pod \"04ebe596-5b04-41e9-acab-61ee99c17107\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.555994 4835 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "04ebe596-5b04-41e9-acab-61ee99c17107" (UID: "04ebe596-5b04-41e9-acab-61ee99c17107"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.556047 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-scripts\") pod \"04ebe596-5b04-41e9-acab-61ee99c17107\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.556086 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "04ebe596-5b04-41e9-acab-61ee99c17107" (UID: "04ebe596-5b04-41e9-acab-61ee99c17107"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.556134 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-additional-scripts\") pod \"04ebe596-5b04-41e9-acab-61ee99c17107\" (UID: \"04ebe596-5b04-41e9-acab-61ee99c17107\") " Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.556682 4835 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.556704 4835 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.556716 4835 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ebe596-5b04-41e9-acab-61ee99c17107-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.557049 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-scripts" (OuterVolumeSpecName: "scripts") pod "04ebe596-5b04-41e9-acab-61ee99c17107" (UID: "04ebe596-5b04-41e9-acab-61ee99c17107"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.557344 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "04ebe596-5b04-41e9-acab-61ee99c17107" (UID: "04ebe596-5b04-41e9-acab-61ee99c17107"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.561513 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ebe596-5b04-41e9-acab-61ee99c17107-kube-api-access-5kjgv" (OuterVolumeSpecName: "kube-api-access-5kjgv") pod "04ebe596-5b04-41e9-acab-61ee99c17107" (UID: "04ebe596-5b04-41e9-acab-61ee99c17107"). InnerVolumeSpecName "kube-api-access-5kjgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.658048 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.658079 4835 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04ebe596-5b04-41e9-acab-61ee99c17107-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:29 crc kubenswrapper[4835]: I0216 15:25:29.658092 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kjgv\" (UniqueName: \"kubernetes.io/projected/04ebe596-5b04-41e9-acab-61ee99c17107-kube-api-access-5kjgv\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:30 crc kubenswrapper[4835]: I0216 15:25:30.083952 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-st4vx-config-jbs9s" Feb 16 15:25:30 crc kubenswrapper[4835]: I0216 15:25:30.083941 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-st4vx-config-jbs9s" event={"ID":"04ebe596-5b04-41e9-acab-61ee99c17107","Type":"ContainerDied","Data":"2da44b1014cf0e5d2018125c52a7df731ba940c22f6fbc2914541c1510182073"} Feb 16 15:25:30 crc kubenswrapper[4835]: I0216 15:25:30.084028 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da44b1014cf0e5d2018125c52a7df731ba940c22f6fbc2914541c1510182073" Feb 16 15:25:30 crc kubenswrapper[4835]: I0216 15:25:30.089389 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 15:25:30 crc kubenswrapper[4835]: I0216 15:25:30.527332 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-st4vx-config-jbs9s"] Feb 16 15:25:30 crc kubenswrapper[4835]: I0216 15:25:30.537339 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-st4vx-config-jbs9s"] Feb 16 15:25:31 crc kubenswrapper[4835]: I0216 15:25:31.097275 4835 generic.go:334] "Generic (PLEG): container finished" podID="99ac121e-3070-48a4-94df-938421346b96" containerID="5bbbc79bd8c474bc62ee085349da0604456d84cfb63fb7d0e31661454fa854a5" exitCode=0 Feb 16 15:25:31 crc kubenswrapper[4835]: I0216 15:25:31.097372 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gx68" event={"ID":"99ac121e-3070-48a4-94df-938421346b96","Type":"ContainerDied","Data":"5bbbc79bd8c474bc62ee085349da0604456d84cfb63fb7d0e31661454fa854a5"} Feb 16 15:25:31 crc kubenswrapper[4835]: I0216 15:25:31.395007 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ebe596-5b04-41e9-acab-61ee99c17107" path="/var/lib/kubelet/pods/04ebe596-5b04-41e9-acab-61ee99c17107/volumes" Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 
15:25:32.465381 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9gx68" Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.648840 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-config-data\") pod \"99ac121e-3070-48a4-94df-938421346b96\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.648892 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-combined-ca-bundle\") pod \"99ac121e-3070-48a4-94df-938421346b96\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.649083 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j92g5\" (UniqueName: \"kubernetes.io/projected/99ac121e-3070-48a4-94df-938421346b96-kube-api-access-j92g5\") pod \"99ac121e-3070-48a4-94df-938421346b96\" (UID: \"99ac121e-3070-48a4-94df-938421346b96\") " Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.654344 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ac121e-3070-48a4-94df-938421346b96-kube-api-access-j92g5" (OuterVolumeSpecName: "kube-api-access-j92g5") pod "99ac121e-3070-48a4-94df-938421346b96" (UID: "99ac121e-3070-48a4-94df-938421346b96"). InnerVolumeSpecName "kube-api-access-j92g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.675566 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99ac121e-3070-48a4-94df-938421346b96" (UID: "99ac121e-3070-48a4-94df-938421346b96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.709219 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-config-data" (OuterVolumeSpecName: "config-data") pod "99ac121e-3070-48a4-94df-938421346b96" (UID: "99ac121e-3070-48a4-94df-938421346b96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.751972 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j92g5\" (UniqueName: \"kubernetes.io/projected/99ac121e-3070-48a4-94df-938421346b96-kube-api-access-j92g5\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.752033 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:32 crc kubenswrapper[4835]: I0216 15:25:32.752047 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ac121e-3070-48a4-94df-938421346b96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.115700 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-9gx68" 
event={"ID":"99ac121e-3070-48a4-94df-938421346b96","Type":"ContainerDied","Data":"d32fed931e451ef3416ab281340325da8e9e244ce32f64b88696140d217dd344"} Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.115839 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32fed931e451ef3416ab281340325da8e9e244ce32f64b88696140d217dd344" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.115738 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-9gx68" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.249044 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.410018 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-56m6p"] Feb 16 15:25:33 crc kubenswrapper[4835]: E0216 15:25:33.410516 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ac121e-3070-48a4-94df-938421346b96" containerName="keystone-db-sync" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.410558 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ac121e-3070-48a4-94df-938421346b96" containerName="keystone-db-sync" Feb 16 15:25:33 crc kubenswrapper[4835]: E0216 15:25:33.410573 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" containerName="init" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.410581 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" containerName="init" Feb 16 15:25:33 crc kubenswrapper[4835]: E0216 15:25:33.410609 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" containerName="dnsmasq-dns" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.410619 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" containerName="dnsmasq-dns" Feb 16 15:25:33 crc kubenswrapper[4835]: E0216 15:25:33.410632 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ebe596-5b04-41e9-acab-61ee99c17107" containerName="ovn-config" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.410639 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ebe596-5b04-41e9-acab-61ee99c17107" containerName="ovn-config" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.410859 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="43eb7ef3-0c31-4cf9-833c-731cc3399f5d" containerName="dnsmasq-dns" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.410884 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ebe596-5b04-41e9-acab-61ee99c17107" containerName="ovn-config" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.410894 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ac121e-3070-48a4-94df-938421346b96" containerName="keystone-db-sync" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.411804 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.416089 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.416302 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.422453 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d8tfl"] Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.422763 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" podUID="5617ce0e-e930-4990-958c-0851d3e4c9fd" containerName="dnsmasq-dns" containerID="cri-o://53406c7f61382c61fdb4ab44dda81ab8647f1b31955f5f277040f27e0ca3071e" gracePeriod=10 Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.425729 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-78dkx" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.426002 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.426102 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.425740 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.456398 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-56m6p"] Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.484052 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fpjxt"] Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.494141 4835 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.512614 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fpjxt"] Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.574226 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-fernet-keys\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.574308 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-combined-ca-bundle\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.574328 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2tcw\" (UniqueName: \"kubernetes.io/projected/00c836f9-e279-4ced-bc58-0e55369710fa-kube-api-access-j2tcw\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.574376 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-config-data\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.574415 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-scripts\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.574444 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-credential-keys\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.675862 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-combined-ca-bundle\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.675906 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tcw\" (UniqueName: \"kubernetes.io/projected/00c836f9-e279-4ced-bc58-0e55369710fa-kube-api-access-j2tcw\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.675945 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.675975 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2xzt8\" (UniqueName: \"kubernetes.io/projected/48931b17-9b19-44ab-add0-d7ea5f823412-kube-api-access-2xzt8\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.676001 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-config-data\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.676040 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.676056 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-scripts\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.676085 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-credential-keys\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.676108 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-config\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.676133 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-fernet-keys\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.676156 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.676194 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-svc\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.700685 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-fernet-keys\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.700997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-credential-keys\") pod 
\"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.705218 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-config-data\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.706290 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-combined-ca-bundle\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.710945 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-scripts\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.726487 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-bmq2c"] Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.737133 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.770140 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-d22pp" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.770760 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.770885 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.777789 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bmq2c"] Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.778720 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.778769 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-config\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.778807 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.778845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-svc\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.778899 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.778927 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xzt8\" (UniqueName: \"kubernetes.io/projected/48931b17-9b19-44ab-add0-d7ea5f823412-kube-api-access-2xzt8\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.780083 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.780677 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-config\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.781185 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.781682 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-svc\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.782169 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.785456 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2tcw\" (UniqueName: \"kubernetes.io/projected/00c836f9-e279-4ced-bc58-0e55369710fa-kube-api-access-j2tcw\") pod \"keystone-bootstrap-56m6p\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.799263 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lkz2b"] Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.800464 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lkz2b"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.820955 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tbkzn"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.821195 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.837094 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.850607 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lkz2b"]
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.882147 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhs8n\" (UniqueName: \"kubernetes.io/projected/eeb4f111-43c6-46d3-aa98-82d93b71b723-kube-api-access-nhs8n\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.882215 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-combined-ca-bundle\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.882236 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-db-sync-config-data\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.882257 4835
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-config-data\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.882441 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-scripts\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.882516 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eeb4f111-43c6-46d3-aa98-82d93b71b723-etc-machine-id\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.884214 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xzt8\" (UniqueName: \"kubernetes.io/projected/48931b17-9b19-44ab-add0-d7ea5f823412-kube-api-access-2xzt8\") pod \"dnsmasq-dns-847c4cc679-fpjxt\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.893098 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.935618 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.949086 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.971029 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.971495 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.990734 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhs8n\" (UniqueName: \"kubernetes.io/projected/eeb4f111-43c6-46d3-aa98-82d93b71b723-kube-api-access-nhs8n\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.990798 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-combined-ca-bundle\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.990842 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-combined-ca-bundle\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.990867 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-db-sync-config-data\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.990897 4835
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-config-data\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.990945 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-config\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.991040 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-scripts\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.991061 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx4gl\" (UniqueName: \"kubernetes.io/projected/7b6dd766-e3c2-4559-920f-b39e5fde5526-kube-api-access-vx4gl\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.992247 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eeb4f111-43c6-46d3-aa98-82d93b71b723-etc-machine-id\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.992472 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/eeb4f111-43c6-46d3-aa98-82d93b71b723-etc-machine-id\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:33 crc kubenswrapper[4835]: I0216 15:25:33.998386 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-db-sync-config-data\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:33.998527 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-config-data\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:33.999962 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-scripts\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.009895 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-combined-ca-bundle\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.030484 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.057000 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.084909 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhs8n\" (UniqueName: \"kubernetes.io/projected/eeb4f111-43c6-46d3-aa98-82d93b71b723-kube-api-access-nhs8n\") pod \"cinder-db-sync-bmq2c\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") " pod="openstack/cinder-db-sync-bmq2c" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095496 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-combined-ca-bundle\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095572 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-config-data\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095593 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zslkt\" (UniqueName: \"kubernetes.io/projected/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-kube-api-access-zslkt\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095639 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-run-httpd\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 
15:25:34.095671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-config\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095734 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx4gl\" (UniqueName: \"kubernetes.io/projected/7b6dd766-e3c2-4559-920f-b39e5fde5526-kube-api-access-vx4gl\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095780 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-log-httpd\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095810 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-scripts\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.095825 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.106350 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-config\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.109804 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-combined-ca-bundle\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.163507 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx4gl\" (UniqueName: \"kubernetes.io/projected/7b6dd766-e3c2-4559-920f-b39e5fde5526-kube-api-access-vx4gl\") pod \"neutron-db-sync-lkz2b\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.170480 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8fld2"] Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.172046 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8fld2"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.190236 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.190433 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2fddv"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.190689 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.200990 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-config-data\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.201026 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zslkt\" (UniqueName: \"kubernetes.io/projected/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-kube-api-access-zslkt\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.201059 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-run-httpd\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.201122 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.201145 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-log-httpd\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.201174 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-scripts\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.201187 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.202191 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-log-httpd\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.202745 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-run-httpd\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.218081 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.225809 4835 generic.go:334] "Generic (PLEG): container finished" podID="5617ce0e-e930-4990-958c-0851d3e4c9fd" containerID="53406c7f61382c61fdb4ab44dda81ab8647f1b31955f5f277040f27e0ca3071e" exitCode=0
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.225854 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" event={"ID":"5617ce0e-e930-4990-958c-0851d3e4c9fd","Type":"ContainerDied","Data":"53406c7f61382c61fdb4ab44dda81ab8647f1b31955f5f277040f27e0ca3071e"}
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.245601 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.245655 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-sgzmb"]
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.246557 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-scripts\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.247239 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-sgzmb"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.248428 4835 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.258605 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8fld2"]
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.259943 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-config-data\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.263742 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.270048 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.270147 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zslkt\" (UniqueName: \"kubernetes.io/projected/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-kube-api-access-zslkt\") pod \"ceilometer-0\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.270300 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.270677 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-gcfcs"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.291086 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-sgzmb"]
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.293982 4835 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.306311 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fpjxt"] Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.308491 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-combined-ca-bundle\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.308553 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca5b397-cd76-4bc1-9552-caecb0f37375-logs\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.308575 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw79k\" (UniqueName: \"kubernetes.io/projected/7ca5b397-cd76-4bc1-9552-caecb0f37375-kube-api-access-fw79k\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.308630 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-config-data\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.308686 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-scripts\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.336028 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7zndp"]
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.337345 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7zndp"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.340385 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.340689 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qxhm8"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.367986 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.413910 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7zndp"]
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.417334 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-combined-ca-bundle\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb"
Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.417400 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-config-data\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2"
Feb 16 15:25:34 crc
kubenswrapper[4835]: I0216 15:25:34.418727 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-scripts\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.418938 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-scripts\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.419118 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqdtg\" (UniqueName: \"kubernetes.io/projected/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-kube-api-access-lqdtg\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.419170 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-config-data\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.419294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-combined-ca-bundle\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.419349 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-certs\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.419405 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca5b397-cd76-4bc1-9552-caecb0f37375-logs\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.419437 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw79k\" (UniqueName: \"kubernetes.io/projected/7ca5b397-cd76-4bc1-9552-caecb0f37375-kube-api-access-fw79k\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.428331 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-config-data\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.431730 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-combined-ca-bundle\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.456984 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7ca5b397-cd76-4bc1-9552-caecb0f37375-logs\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.466078 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw79k\" (UniqueName: \"kubernetes.io/projected/7ca5b397-cd76-4bc1-9552-caecb0f37375-kube-api-access-fw79k\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.468970 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-scripts\") pod \"placement-db-sync-8fld2\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.492775 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-6rgj6"] Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.502242 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.507970 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.518229 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-6rgj6"] Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.521442 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-certs\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.521497 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4nm\" (UniqueName: \"kubernetes.io/projected/f8d68cbc-724d-490f-ae49-654aac2eb8ba-kube-api-access-dz4nm\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.521597 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-combined-ca-bundle\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.521619 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-combined-ca-bundle\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.521642 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-scripts\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.521666 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-db-sync-config-data\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.521733 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqdtg\" (UniqueName: \"kubernetes.io/projected/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-kube-api-access-lqdtg\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.521754 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-config-data\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.525702 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-certs\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.526021 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-config-data\") pod \"cloudkitty-db-sync-sgzmb\" (UID: 
\"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.529659 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-scripts\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.529734 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-combined-ca-bundle\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.552038 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqdtg\" (UniqueName: \"kubernetes.io/projected/3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1-kube-api-access-lqdtg\") pod \"cloudkitty-db-sync-sgzmb\" (UID: \"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1\") " pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.552430 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.610486 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-sgzmb" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.622835 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:25:34 crc kubenswrapper[4835]: E0216 15:25:34.623235 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5617ce0e-e930-4990-958c-0851d3e4c9fd" containerName="dnsmasq-dns" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.623247 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5617ce0e-e930-4990-958c-0851d3e4c9fd" containerName="dnsmasq-dns" Feb 16 15:25:34 crc kubenswrapper[4835]: E0216 15:25:34.623275 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5617ce0e-e930-4990-958c-0851d3e4c9fd" containerName="init" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.623281 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5617ce0e-e930-4990-958c-0851d3e4c9fd" containerName="init" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.623459 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5617ce0e-e930-4990-958c-0851d3e4c9fd" containerName="dnsmasq-dns" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.625013 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.623459 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-swift-storage-0\") pod \"5617ce0e-e930-4990-958c-0851d3e4c9fd\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626047 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-config\") pod \"5617ce0e-e930-4990-958c-0851d3e4c9fd\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626127 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-nb\") pod \"5617ce0e-e930-4990-958c-0851d3e4c9fd\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626150 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqvv8\" (UniqueName: \"kubernetes.io/projected/5617ce0e-e930-4990-958c-0851d3e4c9fd-kube-api-access-kqvv8\") pod \"5617ce0e-e930-4990-958c-0851d3e4c9fd\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626242 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-svc\") pod \"5617ce0e-e930-4990-958c-0851d3e4c9fd\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626267 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-sb\") pod \"5617ce0e-e930-4990-958c-0851d3e4c9fd\" (UID: \"5617ce0e-e930-4990-958c-0851d3e4c9fd\") " Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626529 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626581 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4nm\" (UniqueName: \"kubernetes.io/projected/f8d68cbc-724d-490f-ae49-654aac2eb8ba-kube-api-access-dz4nm\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626706 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-combined-ca-bundle\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626766 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626815 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-db-sync-config-data\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626936 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-config\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.626984 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.627014 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.627062 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnbr\" (UniqueName: \"kubernetes.io/projected/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-kube-api-access-6nnbr\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.639441 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 
16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.640060 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-v9jjp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.640304 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.640415 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.640520 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.647568 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-db-sync-config-data\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.671789 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-combined-ca-bundle\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.689327 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5617ce0e-e930-4990-958c-0851d3e4c9fd-kube-api-access-kqvv8" (OuterVolumeSpecName: "kube-api-access-kqvv8") pod "5617ce0e-e930-4990-958c-0851d3e4c9fd" (UID: "5617ce0e-e930-4990-958c-0851d3e4c9fd"). InnerVolumeSpecName "kube-api-access-kqvv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.695032 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4nm\" (UniqueName: \"kubernetes.io/projected/f8d68cbc-724d-490f-ae49-654aac2eb8ba-kube-api-access-dz4nm\") pod \"barbican-db-sync-7zndp\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.728554 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.728662 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nlzs\" (UniqueName: \"kubernetes.io/projected/32f41195-5638-45bb-8c7a-b245ff38763e-kube-api-access-5nlzs\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.728688 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.728703 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-logs\") pod \"glance-default-external-api-0\" 
(UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.728707 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-config" (OuterVolumeSpecName: "config") pod "5617ce0e-e930-4990-958c-0851d3e4c9fd" (UID: "5617ce0e-e930-4990-958c-0851d3e4c9fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.728851 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.728916 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-config-data\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729034 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-config\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729071 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729109 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729150 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729230 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnbr\" (UniqueName: \"kubernetes.io/projected/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-kube-api-access-6nnbr\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729256 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729294 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729326 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729440 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729452 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqvv8\" (UniqueName: \"kubernetes.io/projected/5617ce0e-e930-4990-958c-0851d3e4c9fd-kube-api-access-kqvv8\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.729608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.740206 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-config\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.740640 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-sb" (OuterVolumeSpecName: 
"ovsdbserver-sb") pod "5617ce0e-e930-4990-958c-0851d3e4c9fd" (UID: "5617ce0e-e930-4990-958c-0851d3e4c9fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.740653 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.744450 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.745132 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.751456 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5617ce0e-e930-4990-958c-0851d3e4c9fd" (UID: "5617ce0e-e930-4990-958c-0851d3e4c9fd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.755588 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5617ce0e-e930-4990-958c-0851d3e4c9fd" (UID: "5617ce0e-e930-4990-958c-0851d3e4c9fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.761348 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnbr\" (UniqueName: \"kubernetes.io/projected/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-kube-api-access-6nnbr\") pod \"dnsmasq-dns-785d8bcb8c-6rgj6\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.791315 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5617ce0e-e930-4990-958c-0851d3e4c9fd" (UID: "5617ce0e-e930-4990-958c-0851d3e4c9fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.814749 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.816322 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.818497 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.819471 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.820206 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.829347 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835411 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835464 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-config-data\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835509 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-scripts\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835575 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 
15:25:34.835597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835645 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835694 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nlzs\" (UniqueName: \"kubernetes.io/projected/32f41195-5638-45bb-8c7a-b245ff38763e-kube-api-access-5nlzs\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835715 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-logs\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835766 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835954 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835964 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.835972 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5617ce0e-e930-4990-958c-0851d3e4c9fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.839146 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.842963 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-logs\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.849459 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.849957 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.850009 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-scripts\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.850757 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.850789 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/568e09178e07d31d4dfb537786fa6b565f218555c933df30dac8ee385e3f74f7/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.851733 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.868490 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nlzs\" (UniqueName: \"kubernetes.io/projected/32f41195-5638-45bb-8c7a-b245ff38763e-kube-api-access-5nlzs\") pod \"glance-default-external-api-0\" (UID: 
\"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.906743 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.924639 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-56m6p"] Feb 16 15:25:34 crc kubenswrapper[4835]: W0216 15:25:34.931729 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00c836f9_e279_4ced_bc58_0e55369710fa.slice/crio-21438c1f1d04d2de43e10876180ac31f493f23b2f0df55426fd1850bddf2b2bd WatchSource:0}: Error finding container 21438c1f1d04d2de43e10876180ac31f493f23b2f0df55426fd1850bddf2b2bd: Status 404 returned error can't find the container with id 21438c1f1d04d2de43e10876180ac31f493f23b2f0df55426fd1850bddf2b2bd Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.938108 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvc7g\" (UniqueName: \"kubernetes.io/projected/9be78845-c5b2-46cf-9f20-43486719fc07-kube-api-access-vvc7g\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.940021 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-logs\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.940128 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.940177 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.940239 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.940289 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.940350 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.940368 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:34 crc kubenswrapper[4835]: I0216 15:25:34.992031 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7zndp" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.000888 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fpjxt"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.042349 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.042425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.042445 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.042476 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvc7g\" (UniqueName: \"kubernetes.io/projected/9be78845-c5b2-46cf-9f20-43486719fc07-kube-api-access-vvc7g\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.042495 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-logs\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.042618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.042655 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.042694 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.045180 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.048778 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.049024 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-logs\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.049211 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.051029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.051357 4835 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.051403 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ae9ad079290f3aff53310ed8b2991a4590ed9ba10dc75692266b91e47118349/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.057246 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.065519 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvc7g\" (UniqueName: \"kubernetes.io/projected/9be78845-c5b2-46cf-9f20-43486719fc07-kube-api-access-vvc7g\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.096288 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.117968 4835 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bmq2c"] Feb 16 15:25:35 crc kubenswrapper[4835]: W0216 15:25:35.134655 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb4f111_43c6_46d3_aa98_82d93b71b723.slice/crio-9c2743b80cbf76c4bac5b8cb5f81f458743108a12728a38e3e9081d35955ec0a WatchSource:0}: Error finding container 9c2743b80cbf76c4bac5b8cb5f81f458743108a12728a38e3e9081d35955ec0a: Status 404 returned error can't find the container with id 9c2743b80cbf76c4bac5b8cb5f81f458743108a12728a38e3e9081d35955ec0a Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.167555 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.180067 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.273554 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lkz2b"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.277910 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" event={"ID":"5617ce0e-e930-4990-958c-0851d3e4c9fd","Type":"ContainerDied","Data":"9e2367077bf2c7a63110c25b4ee97b973fd69725f4e1c643ab2b8af4b75586fb"} Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.277989 4835 scope.go:117] "RemoveContainer" containerID="53406c7f61382c61fdb4ab44dda81ab8647f1b31955f5f277040f27e0ca3071e" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.277932 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-d8tfl" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.293264 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bmq2c" event={"ID":"eeb4f111-43c6-46d3-aa98-82d93b71b723","Type":"ContainerStarted","Data":"9c2743b80cbf76c4bac5b8cb5f81f458743108a12728a38e3e9081d35955ec0a"} Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.294051 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.297686 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" event={"ID":"48931b17-9b19-44ab-add0-d7ea5f823412","Type":"ContainerStarted","Data":"65b9a2e2d04f78be80a003681c5aa06abe51784f355fd78ada7d29cb023cabf4"} Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.299483 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-56m6p" event={"ID":"00c836f9-e279-4ced-bc58-0e55369710fa","Type":"ContainerStarted","Data":"9493edaf9cd7300d06cbd7664e06660377f42377d268488700ceec2cee24c7f8"} Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.299509 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-56m6p" event={"ID":"00c836f9-e279-4ced-bc58-0e55369710fa","Type":"ContainerStarted","Data":"21438c1f1d04d2de43e10876180ac31f493f23b2f0df55426fd1850bddf2b2bd"} Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.380659 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-56m6p" podStartSLOduration=2.380642934 podStartE2EDuration="2.380642934s" podCreationTimestamp="2026-02-16 15:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:35.321002908 +0000 UTC m=+1084.612995803" watchObservedRunningTime="2026-02-16 15:25:35.380642934 
+0000 UTC m=+1084.672635829" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.395776 4835 scope.go:117] "RemoveContainer" containerID="7f9aa8d2f495cdc8f6185cf70a6f2804e33a023f998de7f861b9f955a818aa01" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.455266 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d8tfl"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.455305 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d8tfl"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.455319 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8fld2"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.477982 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-sgzmb"] Feb 16 15:25:35 crc kubenswrapper[4835]: E0216 15:25:35.635772 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:25:35 crc kubenswrapper[4835]: E0216 15:25:35.636233 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:25:35 crc kubenswrapper[4835]: E0216 15:25:35.636376 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:25:35 crc kubenswrapper[4835]: E0216 15:25:35.637435 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.681679 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-6rgj6"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.716901 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7zndp"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.838025 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.963421 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:35 crc kubenswrapper[4835]: I0216 15:25:35.990415 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.008305 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.108358 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.390856 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7zndp" event={"ID":"f8d68cbc-724d-490f-ae49-654aac2eb8ba","Type":"ContainerStarted","Data":"93f1fac06fa81dae59d65b25770a10d93b4007947b50684b24e4851ed25dfd78"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.401042 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerStarted","Data":"adf3ca7ef3b9f1fb779418fbc983c11c5e16217338acb7364d1ed7271a14476e"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.406138 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"9be78845-c5b2-46cf-9f20-43486719fc07","Type":"ContainerStarted","Data":"cf48ddbd0afdcb337f4ebebd76e726be82dac8148ee09de45931855e40318c8e"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.423515 4835 generic.go:334] "Generic (PLEG): container finished" podID="48931b17-9b19-44ab-add0-d7ea5f823412" containerID="9551a0a7a8373c6c014620fde50f6ada31e457ce1e0bc6e9228f51066df9ec17" exitCode=0 Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.423644 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" event={"ID":"48931b17-9b19-44ab-add0-d7ea5f823412","Type":"ContainerDied","Data":"9551a0a7a8373c6c014620fde50f6ada31e457ce1e0bc6e9228f51066df9ec17"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.437246 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8fld2" event={"ID":"7ca5b397-cd76-4bc1-9552-caecb0f37375","Type":"ContainerStarted","Data":"2ff6738cb49b3123adb2e479c26b40b69d2739131e698d066969edd2294ef4b6"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.438389 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-sgzmb" event={"ID":"3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1","Type":"ContainerStarted","Data":"eb63ecd24761464ae937267590772ee52ca1c4a1d6ddaadb9870d59813c85f20"} Feb 16 15:25:36 crc kubenswrapper[4835]: E0216 15:25:36.440287 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.449553 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"32f41195-5638-45bb-8c7a-b245ff38763e","Type":"ContainerStarted","Data":"e606e24f7109a04290cf40b53f6ba1135596e8288321c43d676709863681758d"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.456734 4835 generic.go:334] "Generic (PLEG): container finished" podID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" containerID="8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1" exitCode=0 Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.456851 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" event={"ID":"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf","Type":"ContainerDied","Data":"8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.456884 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" event={"ID":"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf","Type":"ContainerStarted","Data":"dac223a53c727c5dee259108b7ac7c8a8383a8589f9badfb9d3be254b33c0a16"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.485948 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lkz2b" event={"ID":"7b6dd766-e3c2-4559-920f-b39e5fde5526","Type":"ContainerStarted","Data":"fdf765a0d652d2984d1dd6dd0862d2c85d158d3ee6ef9c9cea8567b474179b9e"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.485993 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lkz2b" event={"ID":"7b6dd766-e3c2-4559-920f-b39e5fde5526","Type":"ContainerStarted","Data":"cc9b6fcbde1de16dca194f9eff593ee06e9b06542d6ca49928decb266b9e85f4"} Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.527974 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lkz2b" podStartSLOduration=3.527946771 podStartE2EDuration="3.527946771s" podCreationTimestamp="2026-02-16 15:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:36.519227595 +0000 UTC m=+1085.811220510" watchObservedRunningTime="2026-02-16 15:25:36.527946771 +0000 UTC m=+1085.819939676" Feb 16 15:25:36 crc kubenswrapper[4835]: I0216 15:25:36.969029 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.112931 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-config\") pod \"48931b17-9b19-44ab-add0-d7ea5f823412\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.113317 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-svc\") pod \"48931b17-9b19-44ab-add0-d7ea5f823412\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.113379 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-swift-storage-0\") pod \"48931b17-9b19-44ab-add0-d7ea5f823412\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.113494 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-nb\") pod \"48931b17-9b19-44ab-add0-d7ea5f823412\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.113563 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-sb\") pod \"48931b17-9b19-44ab-add0-d7ea5f823412\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.113604 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xzt8\" (UniqueName: \"kubernetes.io/projected/48931b17-9b19-44ab-add0-d7ea5f823412-kube-api-access-2xzt8\") pod \"48931b17-9b19-44ab-add0-d7ea5f823412\" (UID: \"48931b17-9b19-44ab-add0-d7ea5f823412\") " Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.127959 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48931b17-9b19-44ab-add0-d7ea5f823412-kube-api-access-2xzt8" (OuterVolumeSpecName: "kube-api-access-2xzt8") pod "48931b17-9b19-44ab-add0-d7ea5f823412" (UID: "48931b17-9b19-44ab-add0-d7ea5f823412"). InnerVolumeSpecName "kube-api-access-2xzt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.154169 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-config" (OuterVolumeSpecName: "config") pod "48931b17-9b19-44ab-add0-d7ea5f823412" (UID: "48931b17-9b19-44ab-add0-d7ea5f823412"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.170041 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "48931b17-9b19-44ab-add0-d7ea5f823412" (UID: "48931b17-9b19-44ab-add0-d7ea5f823412"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.172807 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48931b17-9b19-44ab-add0-d7ea5f823412" (UID: "48931b17-9b19-44ab-add0-d7ea5f823412"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.175182 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48931b17-9b19-44ab-add0-d7ea5f823412" (UID: "48931b17-9b19-44ab-add0-d7ea5f823412"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.188232 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48931b17-9b19-44ab-add0-d7ea5f823412" (UID: "48931b17-9b19-44ab-add0-d7ea5f823412"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.216929 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xzt8\" (UniqueName: \"kubernetes.io/projected/48931b17-9b19-44ab-add0-d7ea5f823412-kube-api-access-2xzt8\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.217101 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.217183 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.217242 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.217303 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.217366 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48931b17-9b19-44ab-add0-d7ea5f823412-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.404658 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5617ce0e-e930-4990-958c-0851d3e4c9fd" path="/var/lib/kubelet/pods/5617ce0e-e930-4990-958c-0851d3e4c9fd/volumes" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.511154 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" event={"ID":"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf","Type":"ContainerStarted","Data":"5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9"} Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.512440 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.516507 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9be78845-c5b2-46cf-9f20-43486719fc07","Type":"ContainerStarted","Data":"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3"} Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.523470 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.523548 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-fpjxt" event={"ID":"48931b17-9b19-44ab-add0-d7ea5f823412","Type":"ContainerDied","Data":"65b9a2e2d04f78be80a003681c5aa06abe51784f355fd78ada7d29cb023cabf4"} Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.523628 4835 scope.go:117] "RemoveContainer" containerID="9551a0a7a8373c6c014620fde50f6ada31e457ce1e0bc6e9228f51066df9ec17" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.560923 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32f41195-5638-45bb-8c7a-b245ff38763e","Type":"ContainerStarted","Data":"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8"} Feb 16 15:25:37 crc kubenswrapper[4835]: E0216 15:25:37.600580 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.652603 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" podStartSLOduration=3.652574659 podStartE2EDuration="3.652574659s" podCreationTimestamp="2026-02-16 15:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:37.55620002 +0000 UTC m=+1086.848192925" watchObservedRunningTime="2026-02-16 15:25:37.652574659 +0000 UTC m=+1086.944567554" Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.786674 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fpjxt"] Feb 16 15:25:37 crc kubenswrapper[4835]: I0216 15:25:37.798903 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fpjxt"] Feb 16 15:25:38 crc kubenswrapper[4835]: I0216 15:25:38.580574 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32f41195-5638-45bb-8c7a-b245ff38763e","Type":"ContainerStarted","Data":"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d"} Feb 16 15:25:38 crc kubenswrapper[4835]: I0216 15:25:38.580727 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" containerName="glance-log" containerID="cri-o://43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8" gracePeriod=30 Feb 16 15:25:38 crc kubenswrapper[4835]: I0216 15:25:38.581290 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" 
containerName="glance-httpd" containerID="cri-o://e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d" gracePeriod=30 Feb 16 15:25:38 crc kubenswrapper[4835]: I0216 15:25:38.597152 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9be78845-c5b2-46cf-9f20-43486719fc07","Type":"ContainerStarted","Data":"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912"} Feb 16 15:25:38 crc kubenswrapper[4835]: I0216 15:25:38.597211 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" containerName="glance-log" containerID="cri-o://860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3" gracePeriod=30 Feb 16 15:25:38 crc kubenswrapper[4835]: I0216 15:25:38.597266 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" containerName="glance-httpd" containerID="cri-o://69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912" gracePeriod=30 Feb 16 15:25:38 crc kubenswrapper[4835]: I0216 15:25:38.645800 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.645778891 podStartE2EDuration="5.645778891s" podCreationTimestamp="2026-02-16 15:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:38.644785175 +0000 UTC m=+1087.936778070" watchObservedRunningTime="2026-02-16 15:25:38.645778891 +0000 UTC m=+1087.937771786" Feb 16 15:25:38 crc kubenswrapper[4835]: I0216 15:25:38.652267 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.652248708 podStartE2EDuration="5.652248708s" 
podCreationTimestamp="2026-02-16 15:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:38.608701509 +0000 UTC m=+1087.900694414" watchObservedRunningTime="2026-02-16 15:25:38.652248708 +0000 UTC m=+1087.944241593" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.307518 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.399715 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48931b17-9b19-44ab-add0-d7ea5f823412" path="/var/lib/kubelet/pods/48931b17-9b19-44ab-add0-d7ea5f823412/volumes" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.466962 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.483791 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-logs\") pod \"32f41195-5638-45bb-8c7a-b245ff38763e\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.483870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-scripts\") pod \"32f41195-5638-45bb-8c7a-b245ff38763e\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.484071 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"32f41195-5638-45bb-8c7a-b245ff38763e\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " Feb 16 
15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.484201 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-public-tls-certs\") pod \"32f41195-5638-45bb-8c7a-b245ff38763e\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.484287 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-combined-ca-bundle\") pod \"32f41195-5638-45bb-8c7a-b245ff38763e\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.484363 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-httpd-run\") pod \"32f41195-5638-45bb-8c7a-b245ff38763e\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.484485 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nlzs\" (UniqueName: \"kubernetes.io/projected/32f41195-5638-45bb-8c7a-b245ff38763e-kube-api-access-5nlzs\") pod \"32f41195-5638-45bb-8c7a-b245ff38763e\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.484563 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-config-data\") pod \"32f41195-5638-45bb-8c7a-b245ff38763e\" (UID: \"32f41195-5638-45bb-8c7a-b245ff38763e\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.486060 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-logs" (OuterVolumeSpecName: 
"logs") pod "32f41195-5638-45bb-8c7a-b245ff38763e" (UID: "32f41195-5638-45bb-8c7a-b245ff38763e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.486316 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "32f41195-5638-45bb-8c7a-b245ff38763e" (UID: "32f41195-5638-45bb-8c7a-b245ff38763e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.490909 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f41195-5638-45bb-8c7a-b245ff38763e-kube-api-access-5nlzs" (OuterVolumeSpecName: "kube-api-access-5nlzs") pod "32f41195-5638-45bb-8c7a-b245ff38763e" (UID: "32f41195-5638-45bb-8c7a-b245ff38763e"). InnerVolumeSpecName "kube-api-access-5nlzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.507607 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-scripts" (OuterVolumeSpecName: "scripts") pod "32f41195-5638-45bb-8c7a-b245ff38763e" (UID: "32f41195-5638-45bb-8c7a-b245ff38763e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.551548 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400" (OuterVolumeSpecName: "glance") pod "32f41195-5638-45bb-8c7a-b245ff38763e" (UID: "32f41195-5638-45bb-8c7a-b245ff38763e"). InnerVolumeSpecName "pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.563709 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32f41195-5638-45bb-8c7a-b245ff38763e" (UID: "32f41195-5638-45bb-8c7a-b245ff38763e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.586842 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-logs\") pod \"9be78845-c5b2-46cf-9f20-43486719fc07\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.586912 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-scripts\") pod \"9be78845-c5b2-46cf-9f20-43486719fc07\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.586956 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvc7g\" (UniqueName: \"kubernetes.io/projected/9be78845-c5b2-46cf-9f20-43486719fc07-kube-api-access-vvc7g\") pod \"9be78845-c5b2-46cf-9f20-43486719fc07\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.586992 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-httpd-run\") pod \"9be78845-c5b2-46cf-9f20-43486719fc07\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.587079 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-internal-tls-certs\") pod \"9be78845-c5b2-46cf-9f20-43486719fc07\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.587259 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"9be78845-c5b2-46cf-9f20-43486719fc07\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.587307 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-config-data\") pod \"9be78845-c5b2-46cf-9f20-43486719fc07\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.587389 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-combined-ca-bundle\") pod \"9be78845-c5b2-46cf-9f20-43486719fc07\" (UID: \"9be78845-c5b2-46cf-9f20-43486719fc07\") " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.588057 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.588079 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nlzs\" (UniqueName: \"kubernetes.io/projected/32f41195-5638-45bb-8c7a-b245ff38763e-kube-api-access-5nlzs\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.588113 4835 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f41195-5638-45bb-8c7a-b245ff38763e-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.588124 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.588147 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") on node \"crc\" " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.588160 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.588617 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9be78845-c5b2-46cf-9f20-43486719fc07" (UID: "9be78845-c5b2-46cf-9f20-43486719fc07"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.588770 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-logs" (OuterVolumeSpecName: "logs") pod "9be78845-c5b2-46cf-9f20-43486719fc07" (UID: "9be78845-c5b2-46cf-9f20-43486719fc07"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.600678 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-scripts" (OuterVolumeSpecName: "scripts") pod "9be78845-c5b2-46cf-9f20-43486719fc07" (UID: "9be78845-c5b2-46cf-9f20-43486719fc07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.607584 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "32f41195-5638-45bb-8c7a-b245ff38763e" (UID: "32f41195-5638-45bb-8c7a-b245ff38763e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.615149 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be78845-c5b2-46cf-9f20-43486719fc07-kube-api-access-vvc7g" (OuterVolumeSpecName: "kube-api-access-vvc7g") pod "9be78845-c5b2-46cf-9f20-43486719fc07" (UID: "9be78845-c5b2-46cf-9f20-43486719fc07"). InnerVolumeSpecName "kube-api-access-vvc7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.634602 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-config-data" (OuterVolumeSpecName: "config-data") pod "32f41195-5638-45bb-8c7a-b245ff38763e" (UID: "32f41195-5638-45bb-8c7a-b245ff38763e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.638571 4835 generic.go:334] "Generic (PLEG): container finished" podID="32f41195-5638-45bb-8c7a-b245ff38763e" containerID="e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d" exitCode=143 Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.638912 4835 generic.go:334] "Generic (PLEG): container finished" podID="32f41195-5638-45bb-8c7a-b245ff38763e" containerID="43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8" exitCode=143 Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.639068 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32f41195-5638-45bb-8c7a-b245ff38763e","Type":"ContainerDied","Data":"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d"} Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.639171 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32f41195-5638-45bb-8c7a-b245ff38763e","Type":"ContainerDied","Data":"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8"} Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.639249 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32f41195-5638-45bb-8c7a-b245ff38763e","Type":"ContainerDied","Data":"e606e24f7109a04290cf40b53f6ba1135596e8288321c43d676709863681758d"} Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.639330 4835 scope.go:117] "RemoveContainer" containerID="e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.639655 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.672798 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1" (OuterVolumeSpecName: "glance") pod "9be78845-c5b2-46cf-9f20-43486719fc07" (UID: "9be78845-c5b2-46cf-9f20-43486719fc07"). InnerVolumeSpecName "pvc-ec154127-1c50-4606-8308-de3489ef25c1". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.673616 4835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.673830 4835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400") on node "crc" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.676791 4835 generic.go:334] "Generic (PLEG): container finished" podID="9be78845-c5b2-46cf-9f20-43486719fc07" containerID="69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912" exitCode=0 Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.676902 4835 generic.go:334] "Generic (PLEG): container finished" podID="9be78845-c5b2-46cf-9f20-43486719fc07" containerID="860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3" exitCode=143 Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.677030 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9be78845-c5b2-46cf-9f20-43486719fc07","Type":"ContainerDied","Data":"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912"} Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.677112 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"9be78845-c5b2-46cf-9f20-43486719fc07","Type":"ContainerDied","Data":"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3"} Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.677128 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9be78845-c5b2-46cf-9f20-43486719fc07","Type":"ContainerDied","Data":"cf48ddbd0afdcb337f4ebebd76e726be82dac8148ee09de45931855e40318c8e"} Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.676900 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9be78845-c5b2-46cf-9f20-43486719fc07" (UID: "9be78845-c5b2-46cf-9f20-43486719fc07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.677626 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.689914 4835 reconciler_common.go:293] "Volume detached for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.689948 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.689961 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.689971 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.689980 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.689989 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvc7g\" (UniqueName: \"kubernetes.io/projected/9be78845-c5b2-46cf-9f20-43486719fc07-kube-api-access-vvc7g\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.690001 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9be78845-c5b2-46cf-9f20-43486719fc07-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc 
kubenswrapper[4835]: I0216 15:25:39.690010 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f41195-5638-45bb-8c7a-b245ff38763e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.690035 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") on node \"crc\" " Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.705849 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.728061 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.740001 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-config-data" (OuterVolumeSpecName: "config-data") pod "9be78845-c5b2-46cf-9f20-43486719fc07" (UID: "9be78845-c5b2-46cf-9f20-43486719fc07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.756470 4835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.756933 4835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ec154127-1c50-4606-8308-de3489ef25c1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1") on node "crc" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.761928 4835 scope.go:117] "RemoveContainer" containerID="43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.773677 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:25:39 crc kubenswrapper[4835]: E0216 15:25:39.774288 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" containerName="glance-log" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774303 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" containerName="glance-log" Feb 16 15:25:39 crc kubenswrapper[4835]: E0216 15:25:39.774322 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48931b17-9b19-44ab-add0-d7ea5f823412" containerName="init" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774328 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48931b17-9b19-44ab-add0-d7ea5f823412" containerName="init" Feb 16 15:25:39 crc kubenswrapper[4835]: E0216 15:25:39.774342 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" containerName="glance-httpd" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774348 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" containerName="glance-httpd" Feb 16 15:25:39 crc kubenswrapper[4835]: E0216 15:25:39.774360 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" containerName="glance-log" Feb 16 
15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774366 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" containerName="glance-log" Feb 16 15:25:39 crc kubenswrapper[4835]: E0216 15:25:39.774384 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" containerName="glance-httpd" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774390 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" containerName="glance-httpd" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774596 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" containerName="glance-httpd" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774609 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="48931b17-9b19-44ab-add0-d7ea5f823412" containerName="init" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774618 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" containerName="glance-httpd" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774628 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" containerName="glance-log" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.774641 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" containerName="glance-log" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.777242 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.785806 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.786276 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.792930 4835 reconciler_common.go:293] "Volume detached for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.793646 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.795310 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.807970 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9be78845-c5b2-46cf-9f20-43486719fc07" (UID: "9be78845-c5b2-46cf-9f20-43486719fc07"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.892688 4835 scope.go:117] "RemoveContainer" containerID="e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d" Feb 16 15:25:39 crc kubenswrapper[4835]: E0216 15:25:39.893252 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d\": container with ID starting with e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d not found: ID does not exist" containerID="e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.893300 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d"} err="failed to get container status \"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d\": rpc error: code = NotFound desc = could not find container \"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d\": container with ID starting with e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d not found: ID does not exist" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.893337 4835 scope.go:117] "RemoveContainer" containerID="43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8" Feb 16 15:25:39 crc kubenswrapper[4835]: E0216 15:25:39.898411 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8\": container with ID starting with 43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8 not found: ID does not exist" containerID="43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.898467 
4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8"} err="failed to get container status \"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8\": rpc error: code = NotFound desc = could not find container \"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8\": container with ID starting with 43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8 not found: ID does not exist" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.898505 4835 scope.go:117] "RemoveContainer" containerID="e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899006 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d"} err="failed to get container status \"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d\": rpc error: code = NotFound desc = could not find container \"e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d\": container with ID starting with e9718ebe8c51c5190563c9490217d45b48add169710a0119de5f3ba03e0ef97d not found: ID does not exist" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899044 4835 scope.go:117] "RemoveContainer" containerID="43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899441 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8"} err="failed to get container status \"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8\": rpc error: code = NotFound desc = could not find container \"43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8\": container with ID starting with 
43625bb07417af921fb44de5ad1f2264d6785037825be5987775c6e677724ec8 not found: ID does not exist" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899462 4835 scope.go:117] "RemoveContainer" containerID="69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899666 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899825 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-logs\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899874 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899919 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.899979 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.900009 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.900040 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.900084 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbjf\" (UniqueName: \"kubernetes.io/projected/8c382add-fd25-4394-a43a-b4992607986b-kube-api-access-tfbjf\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.900451 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9be78845-c5b2-46cf-9f20-43486719fc07-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:39 crc kubenswrapper[4835]: I0216 15:25:39.951737 4835 scope.go:117] "RemoveContainer" 
containerID="860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.002403 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.002555 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-logs\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.002586 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.002615 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.002657 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc 
kubenswrapper[4835]: I0216 15:25:40.002675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.002696 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.002718 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfbjf\" (UniqueName: \"kubernetes.io/projected/8c382add-fd25-4394-a43a-b4992607986b-kube-api-access-tfbjf\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.003502 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-logs\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.004812 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 
15:25:40.006924 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.009700 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.010235 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.010708 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.022449 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfbjf\" (UniqueName: \"kubernetes.io/projected/8c382add-fd25-4394-a43a-b4992607986b-kube-api-access-tfbjf\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.027362 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.027402 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/568e09178e07d31d4dfb537786fa6b565f218555c933df30dac8ee385e3f74f7/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.116596 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.130085 4835 scope.go:117] "RemoveContainer" containerID="69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912" Feb 16 15:25:40 crc kubenswrapper[4835]: E0216 15:25:40.130570 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912\": container with ID starting with 69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912 not found: ID does not exist" containerID="69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.130613 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912"} err="failed to get container 
status \"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912\": rpc error: code = NotFound desc = could not find container \"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912\": container with ID starting with 69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912 not found: ID does not exist" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.130639 4835 scope.go:117] "RemoveContainer" containerID="860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3" Feb 16 15:25:40 crc kubenswrapper[4835]: E0216 15:25:40.130906 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3\": container with ID starting with 860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3 not found: ID does not exist" containerID="860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.130944 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3"} err="failed to get container status \"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3\": rpc error: code = NotFound desc = could not find container \"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3\": container with ID starting with 860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3 not found: ID does not exist" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.130962 4835 scope.go:117] "RemoveContainer" containerID="69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.131176 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912"} err="failed to get 
container status \"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912\": rpc error: code = NotFound desc = could not find container \"69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912\": container with ID starting with 69eefe6cbfd74916f370cffaa1d4f0604e1462fa64e850ca62a40bb8482de912 not found: ID does not exist" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.131199 4835 scope.go:117] "RemoveContainer" containerID="860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.131451 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3"} err="failed to get container status \"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3\": rpc error: code = NotFound desc = could not find container \"860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3\": container with ID starting with 860c9854dd90e24db0401d5690ac4b13c940e515b1a6f4715aa806b7d1639ca3 not found: ID does not exist" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.146387 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.177549 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.177601 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.179038 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.180162 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.180673 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.190366 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.191349 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.323370 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-config-data\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.323508 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.323546 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.323722 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.323903 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-scripts\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.323970 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-logs\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.324109 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhh2\" (UniqueName: \"kubernetes.io/projected/090f6dde-5b4b-4154-8123-6e4ba3d0e295-kube-api-access-2dhh2\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.324158 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.425984 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-scripts\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.426352 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-logs\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.426422 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhh2\" (UniqueName: \"kubernetes.io/projected/090f6dde-5b4b-4154-8123-6e4ba3d0e295-kube-api-access-2dhh2\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.426452 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.426594 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-config-data\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.426666 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.426695 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.426743 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.426856 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-logs\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.427399 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.433947 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.434022 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.434094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-scripts\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.434798 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-config-data\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.441862 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.441904 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ae9ad079290f3aff53310ed8b2991a4590ed9ba10dc75692266b91e47118349/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.443107 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhh2\" (UniqueName: \"kubernetes.io/projected/090f6dde-5b4b-4154-8123-6e4ba3d0e295-kube-api-access-2dhh2\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.487514 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.516571 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.702590 4835 generic.go:334] "Generic (PLEG): container finished" podID="00c836f9-e279-4ced-bc58-0e55369710fa" containerID="9493edaf9cd7300d06cbd7664e06660377f42377d268488700ceec2cee24c7f8" exitCode=0 Feb 16 15:25:40 crc kubenswrapper[4835]: I0216 15:25:40.702659 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-56m6p" event={"ID":"00c836f9-e279-4ced-bc58-0e55369710fa","Type":"ContainerDied","Data":"9493edaf9cd7300d06cbd7664e06660377f42377d268488700ceec2cee24c7f8"} Feb 16 15:25:41 crc kubenswrapper[4835]: I0216 15:25:41.390874 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f41195-5638-45bb-8c7a-b245ff38763e" path="/var/lib/kubelet/pods/32f41195-5638-45bb-8c7a-b245ff38763e/volumes" Feb 16 15:25:41 crc kubenswrapper[4835]: I0216 15:25:41.391694 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be78845-c5b2-46cf-9f20-43486719fc07" path="/var/lib/kubelet/pods/9be78845-c5b2-46cf-9f20-43486719fc07/volumes" Feb 16 15:25:44 crc kubenswrapper[4835]: I0216 15:25:44.821705 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:25:44 crc kubenswrapper[4835]: I0216 15:25:44.892238 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vfkhr"] Feb 16 15:25:44 crc kubenswrapper[4835]: I0216 15:25:44.892507 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-vfkhr" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerName="dnsmasq-dns" containerID="cri-o://7b55d02f3ffc4e1ca70e5a68825e2b80e2c7602f11225ba8e35995d419189264" gracePeriod=10 Feb 16 15:25:45 crc kubenswrapper[4835]: I0216 15:25:45.765673 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerID="7b55d02f3ffc4e1ca70e5a68825e2b80e2c7602f11225ba8e35995d419189264" exitCode=0 Feb 16 15:25:45 crc kubenswrapper[4835]: I0216 15:25:45.765876 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vfkhr" event={"ID":"515e8879-485e-4be3-9fb9-896feb6b2d6e","Type":"ContainerDied","Data":"7b55d02f3ffc4e1ca70e5a68825e2b80e2c7602f11225ba8e35995d419189264"} Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.198580 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.294962 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2tcw\" (UniqueName: \"kubernetes.io/projected/00c836f9-e279-4ced-bc58-0e55369710fa-kube-api-access-j2tcw\") pod \"00c836f9-e279-4ced-bc58-0e55369710fa\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.295052 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-fernet-keys\") pod \"00c836f9-e279-4ced-bc58-0e55369710fa\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.295078 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-scripts\") pod \"00c836f9-e279-4ced-bc58-0e55369710fa\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.295102 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-config-data\") pod \"00c836f9-e279-4ced-bc58-0e55369710fa\" (UID: 
\"00c836f9-e279-4ced-bc58-0e55369710fa\") " Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.295180 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-credential-keys\") pod \"00c836f9-e279-4ced-bc58-0e55369710fa\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.295224 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-combined-ca-bundle\") pod \"00c836f9-e279-4ced-bc58-0e55369710fa\" (UID: \"00c836f9-e279-4ced-bc58-0e55369710fa\") " Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.330009 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "00c836f9-e279-4ced-bc58-0e55369710fa" (UID: "00c836f9-e279-4ced-bc58-0e55369710fa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.330079 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c836f9-e279-4ced-bc58-0e55369710fa-kube-api-access-j2tcw" (OuterVolumeSpecName: "kube-api-access-j2tcw") pod "00c836f9-e279-4ced-bc58-0e55369710fa" (UID: "00c836f9-e279-4ced-bc58-0e55369710fa"). InnerVolumeSpecName "kube-api-access-j2tcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.330549 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "00c836f9-e279-4ced-bc58-0e55369710fa" (UID: "00c836f9-e279-4ced-bc58-0e55369710fa"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.336240 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-scripts" (OuterVolumeSpecName: "scripts") pod "00c836f9-e279-4ced-bc58-0e55369710fa" (UID: "00c836f9-e279-4ced-bc58-0e55369710fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.339154 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00c836f9-e279-4ced-bc58-0e55369710fa" (UID: "00c836f9-e279-4ced-bc58-0e55369710fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.341694 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-config-data" (OuterVolumeSpecName: "config-data") pod "00c836f9-e279-4ced-bc58-0e55369710fa" (UID: "00c836f9-e279-4ced-bc58-0e55369710fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.398062 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2tcw\" (UniqueName: \"kubernetes.io/projected/00c836f9-e279-4ced-bc58-0e55369710fa-kube-api-access-j2tcw\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.398097 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.398108 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.398117 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.398125 4835 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.398134 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c836f9-e279-4ced-bc58-0e55369710fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.782494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-56m6p" event={"ID":"00c836f9-e279-4ced-bc58-0e55369710fa","Type":"ContainerDied","Data":"21438c1f1d04d2de43e10876180ac31f493f23b2f0df55426fd1850bddf2b2bd"} Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 
15:25:47.782544 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21438c1f1d04d2de43e10876180ac31f493f23b2f0df55426fd1850bddf2b2bd" Feb 16 15:25:47 crc kubenswrapper[4835]: I0216 15:25:47.782715 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-56m6p" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.279630 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-56m6p"] Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.288304 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-56m6p"] Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.380044 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2pdkm"] Feb 16 15:25:48 crc kubenswrapper[4835]: E0216 15:25:48.380421 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c836f9-e279-4ced-bc58-0e55369710fa" containerName="keystone-bootstrap" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.380437 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c836f9-e279-4ced-bc58-0e55369710fa" containerName="keystone-bootstrap" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.380671 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c836f9-e279-4ced-bc58-0e55369710fa" containerName="keystone-bootstrap" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.381569 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.384065 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.384615 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.385010 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.384180 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-78dkx" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.445172 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2pdkm"] Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.534724 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-combined-ca-bundle\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.534779 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-scripts\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.534837 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-credential-keys\") pod \"keystone-bootstrap-2pdkm\" (UID: 
\"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.534913 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-fernet-keys\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.534932 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwb8g\" (UniqueName: \"kubernetes.io/projected/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-kube-api-access-wwb8g\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.534977 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-config-data\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.586468 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.586544 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.636887 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-credential-keys\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.636984 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-fernet-keys\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.637015 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwb8g\" (UniqueName: \"kubernetes.io/projected/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-kube-api-access-wwb8g\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.637059 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-config-data\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.637146 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-combined-ca-bundle\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.637186 
4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-scripts\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.643085 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-scripts\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.643329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-fernet-keys\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.644360 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-credential-keys\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.644871 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-config-data\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.646059 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-combined-ca-bundle\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.657998 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwb8g\" (UniqueName: \"kubernetes.io/projected/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-kube-api-access-wwb8g\") pod \"keystone-bootstrap-2pdkm\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:48 crc kubenswrapper[4835]: I0216 15:25:48.745610 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:25:49 crc kubenswrapper[4835]: I0216 15:25:49.391061 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c836f9-e279-4ced-bc58-0e55369710fa" path="/var/lib/kubelet/pods/00c836f9-e279-4ced-bc58-0e55369710fa/volumes" Feb 16 15:25:52 crc kubenswrapper[4835]: I0216 15:25:52.824990 4835 generic.go:334] "Generic (PLEG): container finished" podID="7b6dd766-e3c2-4559-920f-b39e5fde5526" containerID="fdf765a0d652d2984d1dd6dd0862d2c85d158d3ee6ef9c9cea8567b474179b9e" exitCode=0 Feb 16 15:25:52 crc kubenswrapper[4835]: I0216 15:25:52.825066 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lkz2b" event={"ID":"7b6dd766-e3c2-4559-920f-b39e5fde5526","Type":"ContainerDied","Data":"fdf765a0d652d2984d1dd6dd0862d2c85d158d3ee6ef9c9cea8567b474179b9e"} Feb 16 15:25:53 crc kubenswrapper[4835]: I0216 15:25:53.145166 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-vfkhr" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Feb 16 15:25:53 crc kubenswrapper[4835]: E0216 15:25:53.510053 4835 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:25:53 crc kubenswrapper[4835]: E0216 15:25:53.510103 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:25:53 crc kubenswrapper[4835]: E0216 15:25:53.510219 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:25:53 crc kubenswrapper[4835]: E0216 15:25:53.511872 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:25:54 crc kubenswrapper[4835]: E0216 15:25:54.316078 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 16 15:25:54 crc kubenswrapper[4835]: E0216 15:25:54.316604 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dz4nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:
[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-7zndp_openstack(f8d68cbc-724d-490f-ae49-654aac2eb8ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:25:54 crc kubenswrapper[4835]: E0216 15:25:54.323727 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-7zndp" podUID="f8d68cbc-724d-490f-ae49-654aac2eb8ba" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.400969 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.545017 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqg6z\" (UniqueName: \"kubernetes.io/projected/515e8879-485e-4be3-9fb9-896feb6b2d6e-kube-api-access-cqg6z\") pod \"515e8879-485e-4be3-9fb9-896feb6b2d6e\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.545113 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-nb\") pod \"515e8879-485e-4be3-9fb9-896feb6b2d6e\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.545198 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-config\") pod \"515e8879-485e-4be3-9fb9-896feb6b2d6e\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " Feb 16 15:25:54 crc kubenswrapper[4835]: 
I0216 15:25:54.545274 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-dns-svc\") pod \"515e8879-485e-4be3-9fb9-896feb6b2d6e\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.545342 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-sb\") pod \"515e8879-485e-4be3-9fb9-896feb6b2d6e\" (UID: \"515e8879-485e-4be3-9fb9-896feb6b2d6e\") " Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.549141 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515e8879-485e-4be3-9fb9-896feb6b2d6e-kube-api-access-cqg6z" (OuterVolumeSpecName: "kube-api-access-cqg6z") pod "515e8879-485e-4be3-9fb9-896feb6b2d6e" (UID: "515e8879-485e-4be3-9fb9-896feb6b2d6e"). InnerVolumeSpecName "kube-api-access-cqg6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.586699 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-config" (OuterVolumeSpecName: "config") pod "515e8879-485e-4be3-9fb9-896feb6b2d6e" (UID: "515e8879-485e-4be3-9fb9-896feb6b2d6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.586904 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "515e8879-485e-4be3-9fb9-896feb6b2d6e" (UID: "515e8879-485e-4be3-9fb9-896feb6b2d6e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.591728 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "515e8879-485e-4be3-9fb9-896feb6b2d6e" (UID: "515e8879-485e-4be3-9fb9-896feb6b2d6e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.592695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "515e8879-485e-4be3-9fb9-896feb6b2d6e" (UID: "515e8879-485e-4be3-9fb9-896feb6b2d6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.647786 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.647840 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.647849 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.647857 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/515e8879-485e-4be3-9fb9-896feb6b2d6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:54 crc 
kubenswrapper[4835]: I0216 15:25:54.647865 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqg6z\" (UniqueName: \"kubernetes.io/projected/515e8879-485e-4be3-9fb9-896feb6b2d6e-kube-api-access-cqg6z\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.843889 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-vfkhr" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.843882 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-vfkhr" event={"ID":"515e8879-485e-4be3-9fb9-896feb6b2d6e","Type":"ContainerDied","Data":"a61fe65810747c7a3708e6134144ff02658ce2b5021e8e1a5a39ba4cad6cd757"} Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.844081 4835 scope.go:117] "RemoveContainer" containerID="7b55d02f3ffc4e1ca70e5a68825e2b80e2c7602f11225ba8e35995d419189264" Feb 16 15:25:54 crc kubenswrapper[4835]: E0216 15:25:54.846129 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-7zndp" podUID="f8d68cbc-724d-490f-ae49-654aac2eb8ba" Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.894663 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vfkhr"] Feb 16 15:25:54 crc kubenswrapper[4835]: I0216 15:25:54.902005 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-vfkhr"] Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.394014 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" path="/var/lib/kubelet/pods/515e8879-485e-4be3-9fb9-896feb6b2d6e/volumes" Feb 16 15:25:55 crc kubenswrapper[4835]: E0216 15:25:55.481326 4835 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 15:25:55 crc kubenswrapper[4835]: E0216 15:25:55.481584 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhs8n,ReadOnly:true,MountPath:/var/run
/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-bmq2c_openstack(eeb4f111-43c6-46d3-aa98-82d93b71b723): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:25:55 crc kubenswrapper[4835]: E0216 15:25:55.482755 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-bmq2c" podUID="eeb4f111-43c6-46d3-aa98-82d93b71b723" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.486259 4835 scope.go:117] "RemoveContainer" containerID="5eecb0a7dbd8c34ac2b5552bb09b5435e07f4e51ab6c405f6433be0417f26073" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.696719 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.772433 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-combined-ca-bundle\") pod \"7b6dd766-e3c2-4559-920f-b39e5fde5526\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.772607 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-config\") pod \"7b6dd766-e3c2-4559-920f-b39e5fde5526\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.772708 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx4gl\" (UniqueName: \"kubernetes.io/projected/7b6dd766-e3c2-4559-920f-b39e5fde5526-kube-api-access-vx4gl\") pod \"7b6dd766-e3c2-4559-920f-b39e5fde5526\" (UID: \"7b6dd766-e3c2-4559-920f-b39e5fde5526\") " Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.779416 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6dd766-e3c2-4559-920f-b39e5fde5526-kube-api-access-vx4gl" (OuterVolumeSpecName: "kube-api-access-vx4gl") pod "7b6dd766-e3c2-4559-920f-b39e5fde5526" (UID: "7b6dd766-e3c2-4559-920f-b39e5fde5526"). InnerVolumeSpecName "kube-api-access-vx4gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.800690 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-config" (OuterVolumeSpecName: "config") pod "7b6dd766-e3c2-4559-920f-b39e5fde5526" (UID: "7b6dd766-e3c2-4559-920f-b39e5fde5526"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.815105 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b6dd766-e3c2-4559-920f-b39e5fde5526" (UID: "7b6dd766-e3c2-4559-920f-b39e5fde5526"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.854694 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lkz2b" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.854691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lkz2b" event={"ID":"7b6dd766-e3c2-4559-920f-b39e5fde5526","Type":"ContainerDied","Data":"cc9b6fcbde1de16dca194f9eff593ee06e9b06542d6ca49928decb266b9e85f4"} Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.854850 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc9b6fcbde1de16dca194f9eff593ee06e9b06542d6ca49928decb266b9e85f4" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.856555 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerStarted","Data":"cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9"} Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.858284 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8fld2" event={"ID":"7ca5b397-cd76-4bc1-9552-caecb0f37375","Type":"ContainerStarted","Data":"7b104e3ee56b28e6b18dd0aa24c1253ab3520c201cfeaf4c47e730f6b37bdd03"} Feb 16 15:25:55 crc kubenswrapper[4835]: E0216 15:25:55.860722 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-bmq2c" podUID="eeb4f111-43c6-46d3-aa98-82d93b71b723" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.875457 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.875483 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx4gl\" (UniqueName: \"kubernetes.io/projected/7b6dd766-e3c2-4559-920f-b39e5fde5526-kube-api-access-vx4gl\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.875496 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6dd766-e3c2-4559-920f-b39e5fde5526-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.885237 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8fld2" podStartSLOduration=4.074712473 podStartE2EDuration="22.885218434s" podCreationTimestamp="2026-02-16 15:25:33 +0000 UTC" firstStartedPulling="2026-02-16 15:25:35.499163497 +0000 UTC m=+1084.791156392" lastFinishedPulling="2026-02-16 15:25:54.309669468 +0000 UTC m=+1103.601662353" observedRunningTime="2026-02-16 15:25:55.881765934 +0000 UTC m=+1105.173758849" watchObservedRunningTime="2026-02-16 15:25:55.885218434 +0000 UTC m=+1105.177211339" Feb 16 15:25:55 crc kubenswrapper[4835]: W0216 15:25:55.965311 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c382add_fd25_4394_a43a_b4992607986b.slice/crio-8ebb5564b87c270ceef50396d700c3ad4cbef942b1592322421f0c756b0ea606 WatchSource:0}: Error finding container 
8ebb5564b87c270ceef50396d700c3ad4cbef942b1592322421f0c756b0ea606: Status 404 returned error can't find the container with id 8ebb5564b87c270ceef50396d700c3ad4cbef942b1592322421f0c756b0ea606 Feb 16 15:25:55 crc kubenswrapper[4835]: I0216 15:25:55.967346 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.004403 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2pdkm"] Feb 16 15:25:56 crc kubenswrapper[4835]: W0216 15:25:56.008669 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c74ec26_1a6a_4e4d_be0f_b53bfd4864e3.slice/crio-49cf9b021fbffdbb83e0a130065eb52eea2d72ecf2dcc1eb170e733b8c812343 WatchSource:0}: Error finding container 49cf9b021fbffdbb83e0a130065eb52eea2d72ecf2dcc1eb170e733b8c812343: Status 404 returned error can't find the container with id 49cf9b021fbffdbb83e0a130065eb52eea2d72ecf2dcc1eb170e733b8c812343 Feb 16 15:25:56 crc kubenswrapper[4835]: W0216 15:25:56.078545 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod090f6dde_5b4b_4154_8123_6e4ba3d0e295.slice/crio-8c16bea08517f3b5c110b5f28cd37921f8b55318249deb25a4212e9a72f23375 WatchSource:0}: Error finding container 8c16bea08517f3b5c110b5f28cd37921f8b55318249deb25a4212e9a72f23375: Status 404 returned error can't find the container with id 8c16bea08517f3b5c110b5f28cd37921f8b55318249deb25a4212e9a72f23375 Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.078777 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.942359 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nzvcg"] Feb 16 15:25:56 crc kubenswrapper[4835]: E0216 15:25:56.944006 4835 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerName="init" Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.944075 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerName="init" Feb 16 15:25:56 crc kubenswrapper[4835]: E0216 15:25:56.944142 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerName="dnsmasq-dns" Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.944200 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerName="dnsmasq-dns" Feb 16 15:25:56 crc kubenswrapper[4835]: E0216 15:25:56.944251 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6dd766-e3c2-4559-920f-b39e5fde5526" containerName="neutron-db-sync" Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.944302 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6dd766-e3c2-4559-920f-b39e5fde5526" containerName="neutron-db-sync" Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.944544 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerName="dnsmasq-dns" Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.944619 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6dd766-e3c2-4559-920f-b39e5fde5526" containerName="neutron-db-sync" Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.945668 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.950306 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nzvcg"] Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.961060 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c382add-fd25-4394-a43a-b4992607986b","Type":"ContainerStarted","Data":"09eb3c91b9f2c05cfd9345b6506600721f584fdf65ae03217acae2ed909637a4"} Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.961101 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c382add-fd25-4394-a43a-b4992607986b","Type":"ContainerStarted","Data":"8ebb5564b87c270ceef50396d700c3ad4cbef942b1592322421f0c756b0ea606"} Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.971484 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090f6dde-5b4b-4154-8123-6e4ba3d0e295","Type":"ContainerStarted","Data":"0f57339025b6b66ed34b035c9d8e04adda9f38a5c30eabde613fb2f3facca854"} Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.971551 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090f6dde-5b4b-4154-8123-6e4ba3d0e295","Type":"ContainerStarted","Data":"8c16bea08517f3b5c110b5f28cd37921f8b55318249deb25a4212e9a72f23375"} Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.993577 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2pdkm" event={"ID":"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3","Type":"ContainerStarted","Data":"2e64983c2028c67df9406fd037236b4fe861334b06649e63b31b9656798f1ed1"} Feb 16 15:25:56 crc kubenswrapper[4835]: I0216 15:25:56.993876 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2pdkm" 
event={"ID":"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3","Type":"ContainerStarted","Data":"49cf9b021fbffdbb83e0a130065eb52eea2d72ecf2dcc1eb170e733b8c812343"} Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:56.999986 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.000038 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.000070 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.000415 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-config\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.000487 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5cs\" (UniqueName: 
\"kubernetes.io/projected/48036321-6092-4b5b-9467-af594e089508-kube-api-access-4s5cs\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.000596 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.048555 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2pdkm" podStartSLOduration=9.04853799 podStartE2EDuration="9.04853799s" podCreationTimestamp="2026-02-16 15:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:57.020006007 +0000 UTC m=+1106.311998892" watchObservedRunningTime="2026-02-16 15:25:57.04853799 +0000 UTC m=+1106.340530885" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.101587 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-config\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.101767 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5cs\" (UniqueName: \"kubernetes.io/projected/48036321-6092-4b5b-9467-af594e089508-kube-api-access-4s5cs\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc 
kubenswrapper[4835]: I0216 15:25:57.101897 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.101954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.101985 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.102020 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.102425 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-config\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.103089 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.103224 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.103380 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.103991 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.115610 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d7bb8ff7b-4thhg"] Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.118046 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.121156 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.121320 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.121429 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.121615 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-tbkzn" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.123322 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5cs\" (UniqueName: \"kubernetes.io/projected/48036321-6092-4b5b-9467-af594e089508-kube-api-access-4s5cs\") pod \"dnsmasq-dns-55f844cf75-nzvcg\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.129293 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d7bb8ff7b-4thhg"] Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.203761 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-config\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.203999 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-combined-ca-bundle\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: 
\"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.204083 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbqp\" (UniqueName: \"kubernetes.io/projected/a64f5816-6cc6-4640-b28b-0f0cfb175d28-kube-api-access-4fbqp\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.204175 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-ovndb-tls-certs\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.204587 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-httpd-config\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.307056 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-httpd-config\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.307434 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-config\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " 
pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.307677 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-combined-ca-bundle\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.307707 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbqp\" (UniqueName: \"kubernetes.io/projected/a64f5816-6cc6-4640-b28b-0f0cfb175d28-kube-api-access-4fbqp\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.307745 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-ovndb-tls-certs\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.314504 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.316345 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-ovndb-tls-certs\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.318303 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-httpd-config\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.322069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-config\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.325111 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbqp\" (UniqueName: \"kubernetes.io/projected/a64f5816-6cc6-4640-b28b-0f0cfb175d28-kube-api-access-4fbqp\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.326249 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-combined-ca-bundle\") pod \"neutron-5d7bb8ff7b-4thhg\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:57 crc kubenswrapper[4835]: I0216 15:25:57.453064 4835 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.019131 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerStarted","Data":"9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84"} Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.025052 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c382add-fd25-4394-a43a-b4992607986b","Type":"ContainerStarted","Data":"2bbf685772ba686e58a400b5e59844fd03d345772e79c6e5018b55a3734fc7fe"} Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.029832 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nzvcg"] Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.053573 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.053555254 podStartE2EDuration="19.053555254s" podCreationTimestamp="2026-02-16 15:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:58.050611318 +0000 UTC m=+1107.342604213" watchObservedRunningTime="2026-02-16 15:25:58.053555254 +0000 UTC m=+1107.345548149" Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.068187 4835 generic.go:334] "Generic (PLEG): container finished" podID="7ca5b397-cd76-4bc1-9552-caecb0f37375" containerID="7b104e3ee56b28e6b18dd0aa24c1253ab3520c201cfeaf4c47e730f6b37bdd03" exitCode=0 Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.068278 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8fld2" 
event={"ID":"7ca5b397-cd76-4bc1-9552-caecb0f37375","Type":"ContainerDied","Data":"7b104e3ee56b28e6b18dd0aa24c1253ab3520c201cfeaf4c47e730f6b37bdd03"} Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.080924 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090f6dde-5b4b-4154-8123-6e4ba3d0e295","Type":"ContainerStarted","Data":"d056f12806ae54de0097c6e6e0ee6ca75c285056e5e1d04e50fe02e2259fe295"} Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.120226 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=18.120206339 podStartE2EDuration="18.120206339s" podCreationTimestamp="2026-02-16 15:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:58.107682333 +0000 UTC m=+1107.399675228" watchObservedRunningTime="2026-02-16 15:25:58.120206339 +0000 UTC m=+1107.412199234" Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.149146 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-vfkhr" podUID="515e8879-485e-4be3-9fb9-896feb6b2d6e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Feb 16 15:25:58 crc kubenswrapper[4835]: I0216 15:25:58.156825 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d7bb8ff7b-4thhg"] Feb 16 15:25:58 crc kubenswrapper[4835]: W0216 15:25:58.158874 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda64f5816_6cc6_4640_b28b_0f0cfb175d28.slice/crio-177d3ab6f0c315a20233ac7287f71cf35f341afb8ad59efd6cb1e54967da8924 WatchSource:0}: Error finding container 177d3ab6f0c315a20233ac7287f71cf35f341afb8ad59efd6cb1e54967da8924: Status 404 returned error can't find the container with id 
177d3ab6f0c315a20233ac7287f71cf35f341afb8ad59efd6cb1e54967da8924 Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.113062 4835 generic.go:334] "Generic (PLEG): container finished" podID="48036321-6092-4b5b-9467-af594e089508" containerID="4e526dec782cd183a12ce2f5b7863274677827d3cbe4d522dbe2f18ffaceb5da" exitCode=0 Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.113617 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" event={"ID":"48036321-6092-4b5b-9467-af594e089508","Type":"ContainerDied","Data":"4e526dec782cd183a12ce2f5b7863274677827d3cbe4d522dbe2f18ffaceb5da"} Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.113665 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" event={"ID":"48036321-6092-4b5b-9467-af594e089508","Type":"ContainerStarted","Data":"62421c5f23e5282a07c40d8809cb2e19fa013dd425f0aca8da00d920e490cfc8"} Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.120658 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d7bb8ff7b-4thhg" event={"ID":"a64f5816-6cc6-4640-b28b-0f0cfb175d28","Type":"ContainerStarted","Data":"f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4"} Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.120691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d7bb8ff7b-4thhg" event={"ID":"a64f5816-6cc6-4640-b28b-0f0cfb175d28","Type":"ContainerStarted","Data":"b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604"} Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.120726 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d7bb8ff7b-4thhg" event={"ID":"a64f5816-6cc6-4640-b28b-0f0cfb175d28","Type":"ContainerStarted","Data":"177d3ab6f0c315a20233ac7287f71cf35f341afb8ad59efd6cb1e54967da8924"} Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.175557 4835 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/neutron-5d7bb8ff7b-4thhg" podStartSLOduration=2.175520972 podStartE2EDuration="2.175520972s" podCreationTimestamp="2026-02-16 15:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:25:59.157194515 +0000 UTC m=+1108.449187410" watchObservedRunningTime="2026-02-16 15:25:59.175520972 +0000 UTC m=+1108.467513867" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.252736 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6796c594c9-9kk2v"] Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.254479 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.259814 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.260049 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.263552 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-public-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.263609 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-ovndb-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.263697 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-config\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.263717 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-httpd-config\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.263734 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-combined-ca-bundle\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.263778 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-internal-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.263805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96hr9\" (UniqueName: \"kubernetes.io/projected/7a0d8af0-945b-4b99-881f-06f183195461-kube-api-access-96hr9\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.265867 4835 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6796c594c9-9kk2v"] Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.365017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-public-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.365374 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-ovndb-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.365429 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-config\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.365448 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-httpd-config\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.365466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-combined-ca-bundle\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc 
kubenswrapper[4835]: I0216 15:25:59.365508 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-internal-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.365551 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96hr9\" (UniqueName: \"kubernetes.io/projected/7a0d8af0-945b-4b99-881f-06f183195461-kube-api-access-96hr9\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.377396 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-ovndb-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.377398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-public-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.386722 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-config\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.390451 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-internal-tls-certs\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.390608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-combined-ca-bundle\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.391591 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96hr9\" (UniqueName: \"kubernetes.io/projected/7a0d8af0-945b-4b99-881f-06f183195461-kube-api-access-96hr9\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.398164 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-httpd-config\") pod \"neutron-6796c594c9-9kk2v\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.607296 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.851898 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8fld2" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.986333 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca5b397-cd76-4bc1-9552-caecb0f37375-logs\") pod \"7ca5b397-cd76-4bc1-9552-caecb0f37375\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.986422 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-combined-ca-bundle\") pod \"7ca5b397-cd76-4bc1-9552-caecb0f37375\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.986476 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-scripts\") pod \"7ca5b397-cd76-4bc1-9552-caecb0f37375\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.986601 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-config-data\") pod \"7ca5b397-cd76-4bc1-9552-caecb0f37375\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.986703 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw79k\" (UniqueName: \"kubernetes.io/projected/7ca5b397-cd76-4bc1-9552-caecb0f37375-kube-api-access-fw79k\") pod \"7ca5b397-cd76-4bc1-9552-caecb0f37375\" (UID: \"7ca5b397-cd76-4bc1-9552-caecb0f37375\") " Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.986738 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7ca5b397-cd76-4bc1-9552-caecb0f37375-logs" (OuterVolumeSpecName: "logs") pod "7ca5b397-cd76-4bc1-9552-caecb0f37375" (UID: "7ca5b397-cd76-4bc1-9552-caecb0f37375"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:25:59 crc kubenswrapper[4835]: I0216 15:25:59.987746 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca5b397-cd76-4bc1-9552-caecb0f37375-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.003175 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca5b397-cd76-4bc1-9552-caecb0f37375-kube-api-access-fw79k" (OuterVolumeSpecName: "kube-api-access-fw79k") pod "7ca5b397-cd76-4bc1-9552-caecb0f37375" (UID: "7ca5b397-cd76-4bc1-9552-caecb0f37375"). InnerVolumeSpecName "kube-api-access-fw79k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.012280 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-scripts" (OuterVolumeSpecName: "scripts") pod "7ca5b397-cd76-4bc1-9552-caecb0f37375" (UID: "7ca5b397-cd76-4bc1-9552-caecb0f37375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.024487 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ca5b397-cd76-4bc1-9552-caecb0f37375" (UID: "7ca5b397-cd76-4bc1-9552-caecb0f37375"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.024581 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-config-data" (OuterVolumeSpecName: "config-data") pod "7ca5b397-cd76-4bc1-9552-caecb0f37375" (UID: "7ca5b397-cd76-4bc1-9552-caecb0f37375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.089362 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.089753 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.089765 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca5b397-cd76-4bc1-9552-caecb0f37375-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.089776 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw79k\" (UniqueName: \"kubernetes.io/projected/7ca5b397-cd76-4bc1-9552-caecb0f37375-kube-api-access-fw79k\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.135964 4835 generic.go:334] "Generic (PLEG): container finished" podID="1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" containerID="2e64983c2028c67df9406fd037236b4fe861334b06649e63b31b9656798f1ed1" exitCode=0 Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.136023 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2pdkm" 
event={"ID":"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3","Type":"ContainerDied","Data":"2e64983c2028c67df9406fd037236b4fe861334b06649e63b31b9656798f1ed1"} Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.140240 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" event={"ID":"48036321-6092-4b5b-9467-af594e089508","Type":"ContainerStarted","Data":"b24a289dea697393820c2aae69c24c03c0ad7d791fea5dd3069c64f04ccc49ce"} Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.140390 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.148453 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8fld2" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.148496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8fld2" event={"ID":"7ca5b397-cd76-4bc1-9552-caecb0f37375","Type":"ContainerDied","Data":"2ff6738cb49b3123adb2e479c26b40b69d2739131e698d066969edd2294ef4b6"} Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.148517 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff6738cb49b3123adb2e479c26b40b69d2739131e698d066969edd2294ef4b6" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.148558 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.168197 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8484d6fc46-gc2kd"] Feb 16 15:26:00 crc kubenswrapper[4835]: E0216 15:26:00.169430 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca5b397-cd76-4bc1-9552-caecb0f37375" containerName="placement-db-sync" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.169451 4835 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7ca5b397-cd76-4bc1-9552-caecb0f37375" containerName="placement-db-sync" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.170459 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca5b397-cd76-4bc1-9552-caecb0f37375" containerName="placement-db-sync" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.179127 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.182125 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.182158 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.187448 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.188140 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.188349 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2fddv" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.189868 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.190068 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.192934 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8484d6fc46-gc2kd"] Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.223756 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" podStartSLOduration=4.223733751 podStartE2EDuration="4.223733751s" podCreationTimestamp="2026-02-16 15:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:00.208929776 +0000 UTC m=+1109.500922681" watchObservedRunningTime="2026-02-16 15:26:00.223733751 +0000 UTC m=+1109.515726646" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.254165 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.265568 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.293460 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43efd331-5c78-4ef6-9967-efb83d49f605-logs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.293567 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-internal-tls-certs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.293646 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-config-data\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 
15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.293675 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnwsx\" (UniqueName: \"kubernetes.io/projected/43efd331-5c78-4ef6-9967-efb83d49f605-kube-api-access-fnwsx\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.293723 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-public-tls-certs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.293756 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-combined-ca-bundle\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.293794 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-scripts\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.295492 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6796c594c9-9kk2v"] Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.395794 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/43efd331-5c78-4ef6-9967-efb83d49f605-logs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.395887 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-internal-tls-certs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.395960 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-config-data\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.396002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnwsx\" (UniqueName: \"kubernetes.io/projected/43efd331-5c78-4ef6-9967-efb83d49f605-kube-api-access-fnwsx\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.396046 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-public-tls-certs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.396070 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-combined-ca-bundle\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.396104 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-scripts\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.396182 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43efd331-5c78-4ef6-9967-efb83d49f605-logs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.400185 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-internal-tls-certs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.400759 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-scripts\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.400951 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-combined-ca-bundle\") pod \"placement-8484d6fc46-gc2kd\" (UID: 
\"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.401962 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-public-tls-certs\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.402257 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-config-data\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.412811 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnwsx\" (UniqueName: \"kubernetes.io/projected/43efd331-5c78-4ef6-9967-efb83d49f605-kube-api-access-fnwsx\") pod \"placement-8484d6fc46-gc2kd\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.516564 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.516806 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.517231 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.557921 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:00 crc kubenswrapper[4835]: I0216 15:26:00.565843 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:01 crc kubenswrapper[4835]: I0216 15:26:01.159599 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796c594c9-9kk2v" event={"ID":"7a0d8af0-945b-4b99-881f-06f183195461","Type":"ContainerStarted","Data":"13a0842d6754f789c1e890d807b4eba2cebea15ec8ac762b9805a7aaec2f4ea2"} Feb 16 15:26:01 crc kubenswrapper[4835]: I0216 15:26:01.160272 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:01 crc kubenswrapper[4835]: I0216 15:26:01.160289 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:01 crc kubenswrapper[4835]: I0216 15:26:01.160401 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:26:01 crc kubenswrapper[4835]: I0216 15:26:01.160596 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:26:03 crc kubenswrapper[4835]: I0216 15:26:03.038521 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Feb 16 15:26:03 crc kubenswrapper[4835]: I0216 15:26:03.039721 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:26:03 crc kubenswrapper[4835]: I0216 15:26:03.364874 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:03 crc kubenswrapper[4835]: I0216 15:26:03.366263 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:05 crc kubenswrapper[4835]: E0216 15:26:05.380411 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.553221 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.737213 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-credential-keys\") pod \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.737574 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-combined-ca-bundle\") pod \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.737671 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwb8g\" (UniqueName: \"kubernetes.io/projected/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-kube-api-access-wwb8g\") pod \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.737761 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-config-data\") pod \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.737803 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-fernet-keys\") pod \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.737861 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-scripts\") pod \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\" (UID: \"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3\") " Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.740324 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" (UID: "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.742471 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" (UID: "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.743309 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-kube-api-access-wwb8g" (OuterVolumeSpecName: "kube-api-access-wwb8g") pod "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" (UID: "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3"). InnerVolumeSpecName "kube-api-access-wwb8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.743449 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-scripts" (OuterVolumeSpecName: "scripts") pod "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" (UID: "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.763219 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-config-data" (OuterVolumeSpecName: "config-data") pod "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" (UID: "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.788888 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" (UID: "1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.832643 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8484d6fc46-gc2kd"] Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.839908 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.839948 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.839960 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.839972 4835 reconciler_common.go:293] "Volume detached for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.839988 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:06 crc kubenswrapper[4835]: I0216 15:26:06.839998 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwb8g\" (UniqueName: \"kubernetes.io/projected/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3-kube-api-access-wwb8g\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:06 crc kubenswrapper[4835]: W0216 15:26:06.840155 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43efd331_5c78_4ef6_9967_efb83d49f605.slice/crio-f071ef05c3ec80e8bfda16314acf511b03c422a2da1419b004ce94e3cb1415fc WatchSource:0}: Error finding container f071ef05c3ec80e8bfda16314acf511b03c422a2da1419b004ce94e3cb1415fc: Status 404 returned error can't find the container with id f071ef05c3ec80e8bfda16314acf511b03c422a2da1419b004ce94e3cb1415fc Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.230340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796c594c9-9kk2v" event={"ID":"7a0d8af0-945b-4b99-881f-06f183195461","Type":"ContainerStarted","Data":"6299b288c705a8c8ec20ace903a9572ff64be7b0b9a433c0a95549b83586da8d"} Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.230761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.230800 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796c594c9-9kk2v" 
event={"ID":"7a0d8af0-945b-4b99-881f-06f183195461","Type":"ContainerStarted","Data":"56fe9445a95d69eaacf7bee6726687a08f5947747ac2d1f640b91d8e4735dd00"} Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.232439 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8484d6fc46-gc2kd" event={"ID":"43efd331-5c78-4ef6-9967-efb83d49f605","Type":"ContainerStarted","Data":"a2fb2f10e95c4449c8d9ace8caa8af2482330e3483e040189377e64ea2e7d217"} Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.232466 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8484d6fc46-gc2kd" event={"ID":"43efd331-5c78-4ef6-9967-efb83d49f605","Type":"ContainerStarted","Data":"68ddc26f2552d702bd4ded6072e6738fd0ac554f91ccb992af69316da25d0bca"} Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.232475 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8484d6fc46-gc2kd" event={"ID":"43efd331-5c78-4ef6-9967-efb83d49f605","Type":"ContainerStarted","Data":"f071ef05c3ec80e8bfda16314acf511b03c422a2da1419b004ce94e3cb1415fc"} Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.233357 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.233383 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.236373 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2pdkm" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.236407 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2pdkm" event={"ID":"1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3","Type":"ContainerDied","Data":"49cf9b021fbffdbb83e0a130065eb52eea2d72ecf2dcc1eb170e733b8c812343"} Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.236445 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49cf9b021fbffdbb83e0a130065eb52eea2d72ecf2dcc1eb170e733b8c812343" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.249023 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerStarted","Data":"d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43"} Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.263820 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6796c594c9-9kk2v" podStartSLOduration=8.263810889 podStartE2EDuration="8.263810889s" podCreationTimestamp="2026-02-16 15:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:07.249874676 +0000 UTC m=+1116.541867561" watchObservedRunningTime="2026-02-16 15:26:07.263810889 +0000 UTC m=+1116.555803784" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.293250 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8484d6fc46-gc2kd" podStartSLOduration=7.293227675 podStartE2EDuration="7.293227675s" podCreationTimestamp="2026-02-16 15:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:07.279264431 +0000 UTC m=+1116.571257346" watchObservedRunningTime="2026-02-16 15:26:07.293227675 
+0000 UTC m=+1116.585220570" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.315697 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.424795 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-6rgj6"] Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.425026 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" podUID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" containerName="dnsmasq-dns" containerID="cri-o://5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9" gracePeriod=10 Feb 16 15:26:07 crc kubenswrapper[4835]: E0216 15:26:07.470429 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c74ec26_1a6a_4e4d_be0f_b53bfd4864e3.slice\": RecentStats: unable to find data in memory cache]" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.717707 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f568f5d7f-9h6bh"] Feb 16 15:26:07 crc kubenswrapper[4835]: E0216 15:26:07.718603 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" containerName="keystone-bootstrap" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.718615 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" containerName="keystone-bootstrap" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.718813 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" containerName="keystone-bootstrap" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.719598 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.723667 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.723939 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.724041 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.724124 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.724424 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.724558 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-78dkx" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.728634 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f568f5d7f-9h6bh"] Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.866297 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-public-tls-certs\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.866367 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-config-data\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " 
pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.866416 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-internal-tls-certs\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.866476 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz84m\" (UniqueName: \"kubernetes.io/projected/1250312c-ef9e-416f-a06d-72d1d31f433f-kube-api-access-vz84m\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.866520 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-fernet-keys\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.866568 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-credential-keys\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.866608 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-scripts\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") 
" pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.866626 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-combined-ca-bundle\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.969580 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-internal-tls-certs\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.970521 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz84m\" (UniqueName: \"kubernetes.io/projected/1250312c-ef9e-416f-a06d-72d1d31f433f-kube-api-access-vz84m\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.970579 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-fernet-keys\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.970641 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-credential-keys\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 
15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.970664 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-scripts\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.970681 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-combined-ca-bundle\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.970774 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-public-tls-certs\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.970825 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-config-data\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.976389 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-combined-ca-bundle\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.976428 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-config-data\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.976439 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-fernet-keys\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.978070 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-internal-tls-certs\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.978084 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-scripts\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.978259 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-public-tls-certs\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.978913 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1250312c-ef9e-416f-a06d-72d1d31f433f-credential-keys\") pod 
\"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:07 crc kubenswrapper[4835]: I0216 15:26:07.988619 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz84m\" (UniqueName: \"kubernetes.io/projected/1250312c-ef9e-416f-a06d-72d1d31f433f-kube-api-access-vz84m\") pod \"keystone-6f568f5d7f-9h6bh\" (UID: \"1250312c-ef9e-416f-a06d-72d1d31f433f\") " pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.037911 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.123780 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.292120 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nnbr\" (UniqueName: \"kubernetes.io/projected/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-kube-api-access-6nnbr\") pod \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.292182 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-svc\") pod \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.292240 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-config\") pod \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.292283 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-swift-storage-0\") pod \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.292426 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-nb\") pod \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.292443 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-sb\") pod \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\" (UID: \"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf\") " Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.324862 4835 generic.go:334] "Generic (PLEG): container finished" podID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" containerID="5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9" exitCode=0 Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.326126 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.326521 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" event={"ID":"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf","Type":"ContainerDied","Data":"5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9"} Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.326643 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-6rgj6" event={"ID":"d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf","Type":"ContainerDied","Data":"dac223a53c727c5dee259108b7ac7c8a8383a8589f9badfb9d3be254b33c0a16"} Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.326707 4835 scope.go:117] "RemoveContainer" containerID="5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.334440 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-kube-api-access-6nnbr" (OuterVolumeSpecName: "kube-api-access-6nnbr") pod "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" (UID: "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf"). InnerVolumeSpecName "kube-api-access-6nnbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.381172 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" (UID: "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.395736 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nnbr\" (UniqueName: \"kubernetes.io/projected/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-kube-api-access-6nnbr\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.395966 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.404413 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" (UID: "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.464743 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" (UID: "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.479848 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-config" (OuterVolumeSpecName: "config") pod "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" (UID: "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.491731 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" (UID: "d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.512152 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.512181 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.512192 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.512202 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.563795 4835 scope.go:117] "RemoveContainer" containerID="8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.602408 4835 scope.go:117] "RemoveContainer" containerID="5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9" Feb 16 15:26:08 crc kubenswrapper[4835]: E0216 15:26:08.604803 4835 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9\": container with ID starting with 5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9 not found: ID does not exist" containerID="5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.604870 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9"} err="failed to get container status \"5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9\": rpc error: code = NotFound desc = could not find container \"5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9\": container with ID starting with 5f3032dc334b4f2a7c9663ae337087702270daacfefa48a335810947e1ea1ec9 not found: ID does not exist" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.604909 4835 scope.go:117] "RemoveContainer" containerID="8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1" Feb 16 15:26:08 crc kubenswrapper[4835]: E0216 15:26:08.605255 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1\": container with ID starting with 8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1 not found: ID does not exist" containerID="8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.605281 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1"} err="failed to get container status \"8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1\": rpc error: code = NotFound desc = could 
not find container \"8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1\": container with ID starting with 8ff4ed229b49d4b210ee26873f67fde6e49916fda7c9ecb61f11dc1e3b059cc1 not found: ID does not exist" Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.639860 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f568f5d7f-9h6bh"] Feb 16 15:26:08 crc kubenswrapper[4835]: W0216 15:26:08.650090 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1250312c_ef9e_416f_a06d_72d1d31f433f.slice/crio-6f30f8300147afcc738959355e99ba99faa4c82eddc1571659ccb1072cc3c743 WatchSource:0}: Error finding container 6f30f8300147afcc738959355e99ba99faa4c82eddc1571659ccb1072cc3c743: Status 404 returned error can't find the container with id 6f30f8300147afcc738959355e99ba99faa4c82eddc1571659ccb1072cc3c743 Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.662815 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-6rgj6"] Feb 16 15:26:08 crc kubenswrapper[4835]: I0216 15:26:08.670863 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-6rgj6"] Feb 16 15:26:09 crc kubenswrapper[4835]: I0216 15:26:09.345237 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7zndp" event={"ID":"f8d68cbc-724d-490f-ae49-654aac2eb8ba","Type":"ContainerStarted","Data":"fc84930b937ca4753ccd161c123df25f946e19625ce483e5f921c5ef27c4e41f"} Feb 16 15:26:09 crc kubenswrapper[4835]: I0216 15:26:09.355126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f568f5d7f-9h6bh" event={"ID":"1250312c-ef9e-416f-a06d-72d1d31f433f","Type":"ContainerStarted","Data":"279205efff1b9daf1b0a6080aa417dd6d0b005f8333da663c1ef873bc60ec2e4"} Feb 16 15:26:09 crc kubenswrapper[4835]: I0216 15:26:09.355176 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-6f568f5d7f-9h6bh" event={"ID":"1250312c-ef9e-416f-a06d-72d1d31f433f","Type":"ContainerStarted","Data":"6f30f8300147afcc738959355e99ba99faa4c82eddc1571659ccb1072cc3c743"} Feb 16 15:26:09 crc kubenswrapper[4835]: I0216 15:26:09.355796 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f568f5d7f-9h6bh" Feb 16 15:26:09 crc kubenswrapper[4835]: I0216 15:26:09.363310 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7zndp" podStartSLOduration=2.189158188 podStartE2EDuration="35.363294085s" podCreationTimestamp="2026-02-16 15:25:34 +0000 UTC" firstStartedPulling="2026-02-16 15:25:35.695129858 +0000 UTC m=+1084.987122753" lastFinishedPulling="2026-02-16 15:26:08.869265755 +0000 UTC m=+1118.161258650" observedRunningTime="2026-02-16 15:26:09.360049491 +0000 UTC m=+1118.652042386" watchObservedRunningTime="2026-02-16 15:26:09.363294085 +0000 UTC m=+1118.655286980" Feb 16 15:26:09 crc kubenswrapper[4835]: I0216 15:26:09.376415 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f568f5d7f-9h6bh" podStartSLOduration=2.376396546 podStartE2EDuration="2.376396546s" podCreationTimestamp="2026-02-16 15:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:09.375224496 +0000 UTC m=+1118.667217391" watchObservedRunningTime="2026-02-16 15:26:09.376396546 +0000 UTC m=+1118.668389441" Feb 16 15:26:09 crc kubenswrapper[4835]: I0216 15:26:09.397052 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" path="/var/lib/kubelet/pods/d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf/volumes" Feb 16 15:26:10 crc kubenswrapper[4835]: I0216 15:26:10.424767 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bmq2c" 
event={"ID":"eeb4f111-43c6-46d3-aa98-82d93b71b723","Type":"ContainerStarted","Data":"c414f12ec2c165eef4869643300f36c8d109643185c54b3c088b39babfc857e5"} Feb 16 15:26:10 crc kubenswrapper[4835]: I0216 15:26:10.450555 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-bmq2c" podStartSLOduration=3.722630255 podStartE2EDuration="37.4505213s" podCreationTimestamp="2026-02-16 15:25:33 +0000 UTC" firstStartedPulling="2026-02-16 15:25:35.140688402 +0000 UTC m=+1084.432681297" lastFinishedPulling="2026-02-16 15:26:08.868579447 +0000 UTC m=+1118.160572342" observedRunningTime="2026-02-16 15:26:10.44939399 +0000 UTC m=+1119.741386875" watchObservedRunningTime="2026-02-16 15:26:10.4505213 +0000 UTC m=+1119.742514195" Feb 16 15:26:11 crc kubenswrapper[4835]: I0216 15:26:11.436780 4835 generic.go:334] "Generic (PLEG): container finished" podID="f8d68cbc-724d-490f-ae49-654aac2eb8ba" containerID="fc84930b937ca4753ccd161c123df25f946e19625ce483e5f921c5ef27c4e41f" exitCode=0 Feb 16 15:26:11 crc kubenswrapper[4835]: I0216 15:26:11.436862 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7zndp" event={"ID":"f8d68cbc-724d-490f-ae49-654aac2eb8ba","Type":"ContainerDied","Data":"fc84930b937ca4753ccd161c123df25f946e19625ce483e5f921c5ef27c4e41f"} Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.129913 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7zndp" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.205081 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-db-sync-config-data\") pod \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.205199 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-combined-ca-bundle\") pod \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.205277 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz4nm\" (UniqueName: \"kubernetes.io/projected/f8d68cbc-724d-490f-ae49-654aac2eb8ba-kube-api-access-dz4nm\") pod \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\" (UID: \"f8d68cbc-724d-490f-ae49-654aac2eb8ba\") " Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.210668 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d68cbc-724d-490f-ae49-654aac2eb8ba-kube-api-access-dz4nm" (OuterVolumeSpecName: "kube-api-access-dz4nm") pod "f8d68cbc-724d-490f-ae49-654aac2eb8ba" (UID: "f8d68cbc-724d-490f-ae49-654aac2eb8ba"). InnerVolumeSpecName "kube-api-access-dz4nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.212744 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f8d68cbc-724d-490f-ae49-654aac2eb8ba" (UID: "f8d68cbc-724d-490f-ae49-654aac2eb8ba"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.235742 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8d68cbc-724d-490f-ae49-654aac2eb8ba" (UID: "f8d68cbc-724d-490f-ae49-654aac2eb8ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.307751 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.308068 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz4nm\" (UniqueName: \"kubernetes.io/projected/f8d68cbc-724d-490f-ae49-654aac2eb8ba-kube-api-access-dz4nm\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.308080 4835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8d68cbc-724d-490f-ae49-654aac2eb8ba-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.492777 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7zndp" event={"ID":"f8d68cbc-724d-490f-ae49-654aac2eb8ba","Type":"ContainerDied","Data":"93f1fac06fa81dae59d65b25770a10d93b4007947b50684b24e4851ed25dfd78"} Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.492919 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f1fac06fa81dae59d65b25770a10d93b4007947b50684b24e4851ed25dfd78" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.492986 4835 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7zndp" Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.497508 4835 generic.go:334] "Generic (PLEG): container finished" podID="eeb4f111-43c6-46d3-aa98-82d93b71b723" containerID="c414f12ec2c165eef4869643300f36c8d109643185c54b3c088b39babfc857e5" exitCode=0 Feb 16 15:26:14 crc kubenswrapper[4835]: I0216 15:26:14.497564 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bmq2c" event={"ID":"eeb4f111-43c6-46d3-aa98-82d93b71b723","Type":"ContainerDied","Data":"c414f12ec2c165eef4869643300f36c8d109643185c54b3c088b39babfc857e5"} Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.395371 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6c8dc89fcf-gqdlj"] Feb 16 15:26:15 crc kubenswrapper[4835]: E0216 15:26:15.396067 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" containerName="dnsmasq-dns" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.396083 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" containerName="dnsmasq-dns" Feb 16 15:26:15 crc kubenswrapper[4835]: E0216 15:26:15.396105 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" containerName="init" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.396113 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" containerName="init" Feb 16 15:26:15 crc kubenswrapper[4835]: E0216 15:26:15.396130 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d68cbc-724d-490f-ae49-654aac2eb8ba" containerName="barbican-db-sync" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.396139 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d68cbc-724d-490f-ae49-654aac2eb8ba" containerName="barbican-db-sync" Feb 16 15:26:15 crc 
kubenswrapper[4835]: I0216 15:26:15.396425 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d68cbc-724d-490f-ae49-654aac2eb8ba" containerName="barbican-db-sync" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.396449 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4480fa9-0e71-4c69-8ef7-2ac9e46a7dcf" containerName="dnsmasq-dns" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.397979 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.413061 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.413389 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qxhm8" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.414201 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.431644 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-config-data-custom\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.431694 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-logs\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.431753 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-config-data\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.431833 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9979\" (UniqueName: \"kubernetes.io/projected/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-kube-api-access-j9979\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.431912 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-combined-ca-bundle\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.434595 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7764659d9b-m4t6j"] Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.436326 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.442483 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.457377 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c8dc89fcf-gqdlj"] Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.494594 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7764659d9b-m4t6j"] Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.516750 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerStarted","Data":"9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11"} Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.516837 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.516821 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="ceilometer-central-agent" containerID="cri-o://cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9" gracePeriod=30 Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.516965 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="proxy-httpd" containerID="cri-o://9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11" gracePeriod=30 Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.517091 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" 
containerName="sg-core" containerID="cri-o://d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43" gracePeriod=30 Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.517130 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="ceilometer-notification-agent" containerID="cri-o://9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84" gracePeriod=30 Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536085 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-config-data\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536134 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-combined-ca-bundle\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536173 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e52d59-832d-4a4a-ab60-b288415a7622-logs\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536191 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-config-data-custom\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9979\" (UniqueName: \"kubernetes.io/projected/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-kube-api-access-j9979\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536259 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4rf\" (UniqueName: \"kubernetes.io/projected/b2e52d59-832d-4a4a-ab60-b288415a7622-kube-api-access-5r4rf\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536276 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-config-data\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536320 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-combined-ca-bundle\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 
15:26:15.536350 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-config-data-custom\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536368 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-logs\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.536776 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-logs\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.537389 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-md5rm"] Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.543955 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-md5rm" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.550274 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-combined-ca-bundle\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.551253 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-config-data\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.561325 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-config-data-custom\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.571031 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9979\" (UniqueName: \"kubernetes.io/projected/6b73d45a-cedb-4986-b66a-89a4aa44c1c5-kube-api-access-j9979\") pod \"barbican-worker-6c8dc89fcf-gqdlj\" (UID: \"6b73d45a-cedb-4986-b66a-89a4aa44c1c5\") " pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.584602 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-md5rm"] Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.595934 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=3.323138104 podStartE2EDuration="42.595914983s" podCreationTimestamp="2026-02-16 15:25:33 +0000 UTC" firstStartedPulling="2026-02-16 15:25:35.353750837 +0000 UTC m=+1084.645743732" lastFinishedPulling="2026-02-16 15:26:14.626527716 +0000 UTC m=+1123.918520611" observedRunningTime="2026-02-16 15:26:15.551824355 +0000 UTC m=+1124.843817250" watchObservedRunningTime="2026-02-16 15:26:15.595914983 +0000 UTC m=+1124.887907878"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.638598 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.638898 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-combined-ca-bundle\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.638940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e52d59-832d-4a4a-ab60-b288415a7622-logs\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.638963 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-config-data-custom\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.639012 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4rf\" (UniqueName: \"kubernetes.io/projected/b2e52d59-832d-4a4a-ab60-b288415a7622-kube-api-access-5r4rf\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.639032 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-config-data\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.639052 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-config\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.639069 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.639128 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.639158 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-svc\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.639187 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drwmj\" (UniqueName: \"kubernetes.io/projected/b8674bd5-4a3c-4a53-a41d-227ab877fe63-kube-api-access-drwmj\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.641879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2e52d59-832d-4a4a-ab60-b288415a7622-logs\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.645577 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-config-data\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.656002 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-combined-ca-bundle\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.660218 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r4rf\" (UniqueName: \"kubernetes.io/projected/b2e52d59-832d-4a4a-ab60-b288415a7622-kube-api-access-5r4rf\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.664319 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2e52d59-832d-4a4a-ab60-b288415a7622-config-data-custom\") pod \"barbican-keystone-listener-7764659d9b-m4t6j\" (UID: \"b2e52d59-832d-4a4a-ab60-b288415a7622\") " pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.682398 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-658f7fbf5b-sv58x"]
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.685154 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.691592 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.720310 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-658f7fbf5b-sv58x"]
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.725881 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c8dc89fcf-gqdlj"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741429 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741497 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm6v5\" (UniqueName: \"kubernetes.io/projected/6bd7f83f-1998-4efd-96eb-3287d2c721c4-kube-api-access-vm6v5\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741566 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741591 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-config\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741615 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741655 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-combined-ca-bundle\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741677 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f83f-1998-4efd-96eb-3287d2c721c4-logs\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741706 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741726 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data-custom\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741752 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-svc\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.741776 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drwmj\" (UniqueName: \"kubernetes.io/projected/b8674bd5-4a3c-4a53-a41d-227ab877fe63-kube-api-access-drwmj\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.742844 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.745944 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.746259 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-config\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.746363 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-svc\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.746672 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.759622 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drwmj\" (UniqueName: \"kubernetes.io/projected/b8674bd5-4a3c-4a53-a41d-227ab877fe63-kube-api-access-drwmj\") pod \"dnsmasq-dns-85ff748b95-md5rm\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.768605 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.845900 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.846027 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-combined-ca-bundle\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.846061 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f83f-1998-4efd-96eb-3287d2c721c4-logs\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.846131 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data-custom\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.846308 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm6v5\" (UniqueName: \"kubernetes.io/projected/6bd7f83f-1998-4efd-96eb-3287d2c721c4-kube-api-access-vm6v5\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.860615 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.861081 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data-custom\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.862866 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-combined-ca-bundle\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.862961 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f83f-1998-4efd-96eb-3287d2c721c4-logs\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:15 crc kubenswrapper[4835]: I0216 15:26:15.871259 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm6v5\" (UniqueName: \"kubernetes.io/projected/6bd7f83f-1998-4efd-96eb-3287d2c721c4-kube-api-access-vm6v5\") pod \"barbican-api-658f7fbf5b-sv58x\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.030334 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.047422 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-md5rm"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.053645 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.155593 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-combined-ca-bundle\") pod \"eeb4f111-43c6-46d3-aa98-82d93b71b723\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") "
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.155751 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-scripts\") pod \"eeb4f111-43c6-46d3-aa98-82d93b71b723\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") "
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.155803 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-config-data\") pod \"eeb4f111-43c6-46d3-aa98-82d93b71b723\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") "
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.155877 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhs8n\" (UniqueName: \"kubernetes.io/projected/eeb4f111-43c6-46d3-aa98-82d93b71b723-kube-api-access-nhs8n\") pod \"eeb4f111-43c6-46d3-aa98-82d93b71b723\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") "
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.155928 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eeb4f111-43c6-46d3-aa98-82d93b71b723-etc-machine-id\") pod \"eeb4f111-43c6-46d3-aa98-82d93b71b723\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") "
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.155963 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-db-sync-config-data\") pod \"eeb4f111-43c6-46d3-aa98-82d93b71b723\" (UID: \"eeb4f111-43c6-46d3-aa98-82d93b71b723\") "
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.161012 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eeb4f111-43c6-46d3-aa98-82d93b71b723-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eeb4f111-43c6-46d3-aa98-82d93b71b723" (UID: "eeb4f111-43c6-46d3-aa98-82d93b71b723"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.172863 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb4f111-43c6-46d3-aa98-82d93b71b723-kube-api-access-nhs8n" (OuterVolumeSpecName: "kube-api-access-nhs8n") pod "eeb4f111-43c6-46d3-aa98-82d93b71b723" (UID: "eeb4f111-43c6-46d3-aa98-82d93b71b723"). InnerVolumeSpecName "kube-api-access-nhs8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.195567 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eeb4f111-43c6-46d3-aa98-82d93b71b723" (UID: "eeb4f111-43c6-46d3-aa98-82d93b71b723"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.215277 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-scripts" (OuterVolumeSpecName: "scripts") pod "eeb4f111-43c6-46d3-aa98-82d93b71b723" (UID: "eeb4f111-43c6-46d3-aa98-82d93b71b723"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.224856 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeb4f111-43c6-46d3-aa98-82d93b71b723" (UID: "eeb4f111-43c6-46d3-aa98-82d93b71b723"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.234204 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c8dc89fcf-gqdlj"]
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.259323 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.259350 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhs8n\" (UniqueName: \"kubernetes.io/projected/eeb4f111-43c6-46d3-aa98-82d93b71b723-kube-api-access-nhs8n\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.259374 4835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eeb4f111-43c6-46d3-aa98-82d93b71b723-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.259384 4835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.261461 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.274025 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-config-data" (OuterVolumeSpecName: "config-data") pod "eeb4f111-43c6-46d3-aa98-82d93b71b723" (UID: "eeb4f111-43c6-46d3-aa98-82d93b71b723"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.364164 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb4f111-43c6-46d3-aa98-82d93b71b723-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:16 crc kubenswrapper[4835]: W0216 15:26:16.367698 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e52d59_832d_4a4a_ab60_b288415a7622.slice/crio-716a3d439a311ecd2bfd9ed1980ea6a697365c2f2b29d5202ad2a2b1323c6390 WatchSource:0}: Error finding container 716a3d439a311ecd2bfd9ed1980ea6a697365c2f2b29d5202ad2a2b1323c6390: Status 404 returned error can't find the container with id 716a3d439a311ecd2bfd9ed1980ea6a697365c2f2b29d5202ad2a2b1323c6390
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.379663 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7764659d9b-m4t6j"]
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.525109 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" event={"ID":"6b73d45a-cedb-4986-b66a-89a4aa44c1c5","Type":"ContainerStarted","Data":"60966bc02e1313e890f95d87661e840c860a248bee88bee995f7c9237cd9bc86"}
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.526816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" event={"ID":"b2e52d59-832d-4a4a-ab60-b288415a7622","Type":"ContainerStarted","Data":"716a3d439a311ecd2bfd9ed1980ea6a697365c2f2b29d5202ad2a2b1323c6390"}
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.529854 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerID="9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11" exitCode=0
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.529871 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerID="d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43" exitCode=2
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.529879 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerID="cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9" exitCode=0
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.529917 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerDied","Data":"9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11"}
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.529935 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerDied","Data":"d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43"}
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.529946 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerDied","Data":"cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9"}
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.531445 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bmq2c" event={"ID":"eeb4f111-43c6-46d3-aa98-82d93b71b723","Type":"ContainerDied","Data":"9c2743b80cbf76c4bac5b8cb5f81f458743108a12728a38e3e9081d35955ec0a"}
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.531461 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2743b80cbf76c4bac5b8cb5f81f458743108a12728a38e3e9081d35955ec0a"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.531512 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bmq2c"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.671576 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-md5rm"]
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.758678 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 15:26:16 crc kubenswrapper[4835]: E0216 15:26:16.759105 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb4f111-43c6-46d3-aa98-82d93b71b723" containerName="cinder-db-sync"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.759117 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb4f111-43c6-46d3-aa98-82d93b71b723" containerName="cinder-db-sync"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.759333 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb4f111-43c6-46d3-aa98-82d93b71b723" containerName="cinder-db-sync"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.760349 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.767475 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.768155 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.770390 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.775881 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.776129 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-d22pp"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.786773 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.786813 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.786855 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.786875 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.786948 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lpkh\" (UniqueName: \"kubernetes.io/projected/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-kube-api-access-6lpkh\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.786974 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.818098 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-658f7fbf5b-sv58x"]
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.889067 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-md5rm"]
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.890642 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lpkh\" (UniqueName: \"kubernetes.io/projected/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-kube-api-access-6lpkh\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.890702 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.890777 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.890809 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.890868 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.890895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.891504 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.896623 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.897110 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.898626 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.901356 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.916250 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lpkh\" (UniqueName: \"kubernetes.io/projected/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-kube-api-access-6lpkh\") pod \"cinder-scheduler-0\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " pod="openstack/cinder-scheduler-0"
Feb
16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.929361 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nx47s"] Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.931288 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.965255 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nx47s"] Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.993209 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.993272 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpp2z\" (UniqueName: \"kubernetes.io/projected/ac798d12-9bfa-4bbd-b013-a91e06a14507-kube-api-access-fpp2z\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.993311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.993336 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.993370 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-config\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:16 crc kubenswrapper[4835]: I0216 15:26:16.993429 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.084098 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.085567 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.093072 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.101134 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.101227 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.101264 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpp2z\" (UniqueName: \"kubernetes.io/projected/ac798d12-9bfa-4bbd-b013-a91e06a14507-kube-api-access-fpp2z\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.101285 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.101310 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.101343 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-config\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.102165 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-config\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.103377 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.103919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.105839 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" 
(UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.107076 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.108300 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.126270 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpp2z\" (UniqueName: \"kubernetes.io/projected/ac798d12-9bfa-4bbd-b013-a91e06a14507-kube-api-access-fpp2z\") pod \"dnsmasq-dns-5c9776ccc5-nx47s\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.137044 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.202665 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.202726 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-scripts\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.202770 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c16d9d-7ff4-4112-a586-11c72b643cd5-logs\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.202794 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjh5w\" (UniqueName: \"kubernetes.io/projected/00c16d9d-7ff4-4112-a586-11c72b643cd5-kube-api-access-gjh5w\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.202824 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data-custom\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.202861 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.202900 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00c16d9d-7ff4-4112-a586-11c72b643cd5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.280257 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.305880 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.306709 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00c16d9d-7ff4-4112-a586-11c72b643cd5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.306804 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 
15:26:17.306872 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-scripts\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.306978 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c16d9d-7ff4-4112-a586-11c72b643cd5-logs\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.307011 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjh5w\" (UniqueName: \"kubernetes.io/projected/00c16d9d-7ff4-4112-a586-11c72b643cd5-kube-api-access-gjh5w\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.307047 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00c16d9d-7ff4-4112-a586-11c72b643cd5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.307064 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data-custom\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.308741 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c16d9d-7ff4-4112-a586-11c72b643cd5-logs\") pod \"cinder-api-0\" (UID: 
\"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.312218 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-scripts\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.312685 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.320590 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data-custom\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.324867 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.334366 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjh5w\" (UniqueName: \"kubernetes.io/projected/00c16d9d-7ff4-4112-a586-11c72b643cd5-kube-api-access-gjh5w\") pod \"cinder-api-0\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.393446 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.565957 4835 generic.go:334] "Generic (PLEG): container finished" podID="b8674bd5-4a3c-4a53-a41d-227ab877fe63" containerID="c124cbb8b7fd7db3455880437e92c0c8b498d33a8b84f03cf093a9089dcb085b" exitCode=0 Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.565991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-md5rm" event={"ID":"b8674bd5-4a3c-4a53-a41d-227ab877fe63","Type":"ContainerDied","Data":"c124cbb8b7fd7db3455880437e92c0c8b498d33a8b84f03cf093a9089dcb085b"} Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.566031 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-md5rm" event={"ID":"b8674bd5-4a3c-4a53-a41d-227ab877fe63","Type":"ContainerStarted","Data":"dbf80cbd6eb8e2e1fa6bbd80a014a772f3c20f9994e17ef05b970f6c8d719582"} Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.570883 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-658f7fbf5b-sv58x" event={"ID":"6bd7f83f-1998-4efd-96eb-3287d2c721c4","Type":"ContainerStarted","Data":"21a59e9e2b6b7051d51e25711d66dda32b7962fb5d2140fec0d523215ca58872"} Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.570912 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-658f7fbf5b-sv58x" event={"ID":"6bd7f83f-1998-4efd-96eb-3287d2c721c4","Type":"ContainerStarted","Data":"1b185866603ad532ad54735d7e0242fc5b8f22e15a097cd7841891045efb1d1c"} Feb 16 15:26:17 crc kubenswrapper[4835]: E0216 15:26:17.580341 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:26:17 crc kubenswrapper[4835]: E0216 15:26:17.580401 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:26:17 crc kubenswrapper[4835]: E0216 15:26:17.580621 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:26:17 crc kubenswrapper[4835]: E0216 15:26:17.582173 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.697291 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.861247 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nx47s"] Feb 16 15:26:17 crc kubenswrapper[4835]: I0216 15:26:17.992478 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:18 crc kubenswrapper[4835]: W0216 15:26:18.345615 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab3be26e_b15a_45b0_a4e4_dd3f21ca1302.slice/crio-b6a889c7ed26fa4ce0405eb20bedbcf311012e210b1549e718bdb4a77f9d5227 WatchSource:0}: Error finding container b6a889c7ed26fa4ce0405eb20bedbcf311012e210b1549e718bdb4a77f9d5227: Status 404 returned error can't find the container with id b6a889c7ed26fa4ce0405eb20bedbcf311012e210b1549e718bdb4a77f9d5227 Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.581128 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302","Type":"ContainerStarted","Data":"b6a889c7ed26fa4ce0405eb20bedbcf311012e210b1549e718bdb4a77f9d5227"} Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.582323 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00c16d9d-7ff4-4112-a586-11c72b643cd5","Type":"ContainerStarted","Data":"d249a5939e2c642c01f54531f472fc4e1e4d24ecfd3591bdb6f8d6a60564a0c4"} Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.583690 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" 
event={"ID":"ac798d12-9bfa-4bbd-b013-a91e06a14507","Type":"ContainerStarted","Data":"733e3efed305b7b8046caf1f690ab36fe30e9da8933a0bc43854b01929325acf"} Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.585093 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-658f7fbf5b-sv58x" event={"ID":"6bd7f83f-1998-4efd-96eb-3287d2c721c4","Type":"ContainerStarted","Data":"c8838385f953194be7654695a4e7a775cfb7f55424c262061d6ec5be78b70914"} Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.585282 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-658f7fbf5b-sv58x" Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.585310 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-658f7fbf5b-sv58x" Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.586456 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.586496 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.605717 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-658f7fbf5b-sv58x" podStartSLOduration=3.605693778 podStartE2EDuration="3.605693778s" podCreationTimestamp="2026-02-16 15:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 15:26:18.601835627 +0000 UTC m=+1127.893828522" watchObservedRunningTime="2026-02-16 15:26:18.605693778 +0000 UTC m=+1127.897686673" Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.838061 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-md5rm" Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.956856 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-sb\") pod \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.957002 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drwmj\" (UniqueName: \"kubernetes.io/projected/b8674bd5-4a3c-4a53-a41d-227ab877fe63-kube-api-access-drwmj\") pod \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.957052 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-swift-storage-0\") pod \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.957074 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-config\") pod \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.957126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-svc\") pod \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.957165 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-nb\") pod \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\" (UID: \"b8674bd5-4a3c-4a53-a41d-227ab877fe63\") " Feb 16 15:26:18 crc kubenswrapper[4835]: I0216 15:26:18.995116 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8674bd5-4a3c-4a53-a41d-227ab877fe63-kube-api-access-drwmj" (OuterVolumeSpecName: "kube-api-access-drwmj") pod "b8674bd5-4a3c-4a53-a41d-227ab877fe63" (UID: "b8674bd5-4a3c-4a53-a41d-227ab877fe63"). InnerVolumeSpecName "kube-api-access-drwmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.014197 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b8674bd5-4a3c-4a53-a41d-227ab877fe63" (UID: "b8674bd5-4a3c-4a53-a41d-227ab877fe63"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.025455 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b8674bd5-4a3c-4a53-a41d-227ab877fe63" (UID: "b8674bd5-4a3c-4a53-a41d-227ab877fe63"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.031365 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b8674bd5-4a3c-4a53-a41d-227ab877fe63" (UID: "b8674bd5-4a3c-4a53-a41d-227ab877fe63"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.040146 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-config" (OuterVolumeSpecName: "config") pod "b8674bd5-4a3c-4a53-a41d-227ab877fe63" (UID: "b8674bd5-4a3c-4a53-a41d-227ab877fe63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.052660 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b8674bd5-4a3c-4a53-a41d-227ab877fe63" (UID: "b8674bd5-4a3c-4a53-a41d-227ab877fe63"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.058981 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.059011 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drwmj\" (UniqueName: \"kubernetes.io/projected/b8674bd5-4a3c-4a53-a41d-227ab877fe63-kube-api-access-drwmj\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.059020 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.059032 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.059041 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.059049 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8674bd5-4a3c-4a53-a41d-227ab877fe63-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.613693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" event={"ID":"b2e52d59-832d-4a4a-ab60-b288415a7622","Type":"ContainerStarted","Data":"5ed4a6ab27ddfdd0329974d9ce6f1bb242342cd8e1508ea27dee6c7cec620e2e"} Feb 16 
15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.618922 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-md5rm" event={"ID":"b8674bd5-4a3c-4a53-a41d-227ab877fe63","Type":"ContainerDied","Data":"dbf80cbd6eb8e2e1fa6bbd80a014a772f3c20f9994e17ef05b970f6c8d719582"} Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.618963 4835 scope.go:117] "RemoveContainer" containerID="c124cbb8b7fd7db3455880437e92c0c8b498d33a8b84f03cf093a9089dcb085b" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.619060 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-md5rm" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.625035 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" event={"ID":"6b73d45a-cedb-4986-b66a-89a4aa44c1c5","Type":"ContainerStarted","Data":"35d0200fabd7a39e811fc38af97a1dbb7d8f5980e7024dde501ab21afc66fe55"} Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.636169 4835 generic.go:334] "Generic (PLEG): container finished" podID="ac798d12-9bfa-4bbd-b013-a91e06a14507" containerID="fdde2115359dfdad633ab90d25431e49284a54bd672ee2b4d3722a753d4d8e11" exitCode=0 Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.637226 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" event={"ID":"ac798d12-9bfa-4bbd-b013-a91e06a14507","Type":"ContainerDied","Data":"fdde2115359dfdad633ab90d25431e49284a54bd672ee2b4d3722a753d4d8e11"} Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.643116 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" podStartSLOduration=2.233603997 podStartE2EDuration="4.643089905s" podCreationTimestamp="2026-02-16 15:26:15 +0000 UTC" firstStartedPulling="2026-02-16 15:26:16.36985331 +0000 UTC m=+1125.661846205" lastFinishedPulling="2026-02-16 
15:26:18.779339198 +0000 UTC m=+1128.071332113" observedRunningTime="2026-02-16 15:26:19.631579645 +0000 UTC m=+1128.923572550" watchObservedRunningTime="2026-02-16 15:26:19.643089905 +0000 UTC m=+1128.935082800" Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.723713 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-md5rm"] Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.731476 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-md5rm"] Feb 16 15:26:19 crc kubenswrapper[4835]: I0216 15:26:19.732098 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" podStartSLOduration=2.227671112 podStartE2EDuration="4.732080071s" podCreationTimestamp="2026-02-16 15:26:15 +0000 UTC" firstStartedPulling="2026-02-16 15:26:16.273339448 +0000 UTC m=+1125.565332343" lastFinishedPulling="2026-02-16 15:26:18.777748407 +0000 UTC m=+1128.069741302" observedRunningTime="2026-02-16 15:26:19.701739351 +0000 UTC m=+1128.993732246" watchObservedRunningTime="2026-02-16 15:26:19.732080071 +0000 UTC m=+1129.024072966" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.344853 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.458316 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.510663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-sg-core-conf-yaml\") pod \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.510752 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-config-data\") pod \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.510874 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zslkt\" (UniqueName: \"kubernetes.io/projected/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-kube-api-access-zslkt\") pod \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.510902 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-run-httpd\") pod \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.511516 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-combined-ca-bundle\") pod \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " Feb 16 15:26:20 crc 
kubenswrapper[4835]: I0216 15:26:20.511570 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-log-httpd\") pod \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.511700 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-scripts\") pod \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.511692 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" (UID: "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.512213 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.512832 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" (UID: "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.516999 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-scripts" (OuterVolumeSpecName: "scripts") pod "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" (UID: "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.519247 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-kube-api-access-zslkt" (OuterVolumeSpecName: "kube-api-access-zslkt") pod "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" (UID: "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c"). InnerVolumeSpecName "kube-api-access-zslkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.549887 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" (UID: "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.613455 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" (UID: "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.614295 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-combined-ca-bundle\") pod \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\" (UID: \"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c\") " Feb 16 15:26:20 crc kubenswrapper[4835]: W0216 15:26:20.614430 4835 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c/volumes/kubernetes.io~secret/combined-ca-bundle Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.614460 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" (UID: "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.614995 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zslkt\" (UniqueName: \"kubernetes.io/projected/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-kube-api-access-zslkt\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.615019 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.615028 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.615037 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.615045 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.632840 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-config-data" (OuterVolumeSpecName: "config-data") pod "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" (UID: "3c3073d2-9b81-4c5f-96fe-eb303d71bd4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.648305 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00c16d9d-7ff4-4112-a586-11c72b643cd5","Type":"ContainerStarted","Data":"944468c04ff762fe324503254045979a3d56eff0887c6541397ac7ab3142bdcb"} Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.648347 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00c16d9d-7ff4-4112-a586-11c72b643cd5","Type":"ContainerStarted","Data":"5c11d40ab4e8dd2514bf5c730cf5e4ef407679a126f490d0bf76c2dcc054d8e3"} Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.648465 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api-log" containerID="cri-o://5c11d40ab4e8dd2514bf5c730cf5e4ef407679a126f490d0bf76c2dcc054d8e3" gracePeriod=30 Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.648719 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.648986 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api" containerID="cri-o://944468c04ff762fe324503254045979a3d56eff0887c6541397ac7ab3142bdcb" gracePeriod=30 Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.665323 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c8dc89fcf-gqdlj" event={"ID":"6b73d45a-cedb-4986-b66a-89a4aa44c1c5","Type":"ContainerStarted","Data":"71fca71c3d50c346a6197391eed58e549313efb422c289cf4f8d73a52dd90557"} Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.667905 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" 
event={"ID":"ac798d12-9bfa-4bbd-b013-a91e06a14507","Type":"ContainerStarted","Data":"ee536bffe31678bd43ee3c1f890e746fa19bb4cfd94003766757c2afda2ef8fb"} Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.671790 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.673550 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.6734935699999998 podStartE2EDuration="3.67349357s" podCreationTimestamp="2026-02-16 15:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:20.665079071 +0000 UTC m=+1129.957071966" watchObservedRunningTime="2026-02-16 15:26:20.67349357 +0000 UTC m=+1129.965486465" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.681856 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7764659d9b-m4t6j" event={"ID":"b2e52d59-832d-4a4a-ab60-b288415a7622","Type":"ContainerStarted","Data":"e30be29594c8c21b5b3c74ed8758a72a2d77ec3758f976d1f72c9cfefe3b0a9e"} Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.697981 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302","Type":"ContainerStarted","Data":"5436b9fa5897cad3d5ee6062bb5c4b4d30dcca12a35ff6d67f35e2c3b301f718"} Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.702605 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerID="9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84" exitCode=0 Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.702652 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerDied","Data":"9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84"} Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.702691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3c3073d2-9b81-4c5f-96fe-eb303d71bd4c","Type":"ContainerDied","Data":"adf3ca7ef3b9f1fb779418fbc983c11c5e16217338acb7364d1ed7271a14476e"} Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.702709 4835 scope.go:117] "RemoveContainer" containerID="9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.702869 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" podStartSLOduration=4.702850834 podStartE2EDuration="4.702850834s" podCreationTimestamp="2026-02-16 15:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:20.688254514 +0000 UTC m=+1129.980247409" watchObservedRunningTime="2026-02-16 15:26:20.702850834 +0000 UTC m=+1129.994843729" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.702907 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.718061 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.744414 4835 scope.go:117] "RemoveContainer" containerID="d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.787451 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.805785 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.820908 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:20 crc kubenswrapper[4835]: E0216 15:26:20.821332 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="ceilometer-central-agent" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821349 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="ceilometer-central-agent" Feb 16 15:26:20 crc kubenswrapper[4835]: E0216 15:26:20.821358 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="proxy-httpd" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821366 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="proxy-httpd" Feb 16 15:26:20 crc kubenswrapper[4835]: E0216 15:26:20.821383 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="sg-core" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821389 
4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="sg-core" Feb 16 15:26:20 crc kubenswrapper[4835]: E0216 15:26:20.821419 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="ceilometer-notification-agent" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821425 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="ceilometer-notification-agent" Feb 16 15:26:20 crc kubenswrapper[4835]: E0216 15:26:20.821439 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8674bd5-4a3c-4a53-a41d-227ab877fe63" containerName="init" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821445 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8674bd5-4a3c-4a53-a41d-227ab877fe63" containerName="init" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821636 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="ceilometer-notification-agent" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821659 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="ceilometer-central-agent" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821666 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8674bd5-4a3c-4a53-a41d-227ab877fe63" containerName="init" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821672 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="proxy-httpd" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.821682 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" containerName="sg-core" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.823432 
4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.832700 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.837933 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.838062 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.862402 4835 scope.go:117] "RemoveContainer" containerID="9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.914684 4835 scope.go:117] "RemoveContainer" containerID="cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.926901 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-config-data\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.926994 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-log-httpd\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.927040 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-scripts\") pod \"ceilometer-0\" (UID: 
\"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.927086 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwf6\" (UniqueName: \"kubernetes.io/projected/50108a39-fee9-46bc-a8f0-6c250e5fb27e-kube-api-access-fbwf6\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.927105 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.927173 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.929268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-run-httpd\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.999079 4835 scope.go:117] "RemoveContainer" containerID="9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11" Feb 16 15:26:20 crc kubenswrapper[4835]: E0216 15:26:20.999496 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11\": container with ID starting with 9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11 not found: ID does not exist" containerID="9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.999566 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11"} err="failed to get container status \"9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11\": rpc error: code = NotFound desc = could not find container \"9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11\": container with ID starting with 9a08096f39f4abe3ce4c3e8e4411ab6e7fba316f41dc9aca7556ec0147764a11 not found: ID does not exist" Feb 16 15:26:20 crc kubenswrapper[4835]: I0216 15:26:20.999593 4835 scope.go:117] "RemoveContainer" containerID="d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43" Feb 16 15:26:20 crc kubenswrapper[4835]: E0216 15:26:20.999914 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43\": container with ID starting with d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43 not found: ID does not exist" containerID="d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:20.999973 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43"} err="failed to get container status \"d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43\": rpc error: code = NotFound desc = could not find container \"d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43\": container with ID 
starting with d9f014429de047bddcb35b2921947f6cc45d46fbd995b5a03d4c39b44c55ae43 not found: ID does not exist" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.000018 4835 scope.go:117] "RemoveContainer" containerID="9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84" Feb 16 15:26:21 crc kubenswrapper[4835]: E0216 15:26:21.000414 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84\": container with ID starting with 9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84 not found: ID does not exist" containerID="9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.000435 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84"} err="failed to get container status \"9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84\": rpc error: code = NotFound desc = could not find container \"9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84\": container with ID starting with 9318fe45fa94086436b9c5962775be17396029603e277fad7f493f3d8f65ca84 not found: ID does not exist" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.000449 4835 scope.go:117] "RemoveContainer" containerID="cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9" Feb 16 15:26:21 crc kubenswrapper[4835]: E0216 15:26:21.000904 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9\": container with ID starting with cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9 not found: ID does not exist" containerID="cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9" Feb 16 
15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.000924 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9"} err="failed to get container status \"cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9\": rpc error: code = NotFound desc = could not find container \"cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9\": container with ID starting with cf2554531880261847bbbeb289e4e0c19db7f8da63dc773c2e5dc287f04343c9 not found: ID does not exist" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.030558 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-config-data\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.030644 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-log-httpd\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.030670 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-scripts\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.030707 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwf6\" (UniqueName: \"kubernetes.io/projected/50108a39-fee9-46bc-a8f0-6c250e5fb27e-kube-api-access-fbwf6\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 
15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.030731 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.030790 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.030883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-run-httpd\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.031303 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-log-httpd\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.031661 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-run-httpd\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.043114 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-scripts\") pod \"ceilometer-0\" (UID: 
\"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.043886 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.050244 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.053104 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-config-data\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.053193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwf6\" (UniqueName: \"kubernetes.io/projected/50108a39-fee9-46bc-a8f0-6c250e5fb27e-kube-api-access-fbwf6\") pod \"ceilometer-0\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.207560 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.469756 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3073d2-9b81-4c5f-96fe-eb303d71bd4c" path="/var/lib/kubelet/pods/3c3073d2-9b81-4c5f-96fe-eb303d71bd4c/volumes" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.470771 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8674bd5-4a3c-4a53-a41d-227ab877fe63" path="/var/lib/kubelet/pods/b8674bd5-4a3c-4a53-a41d-227ab877fe63/volumes" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.716377 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302","Type":"ContainerStarted","Data":"0f7f011f1bcc95eb50e68bb5bcc54abce374e5ff148c2e0f31b23095ab9142d1"} Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.722809 4835 generic.go:334] "Generic (PLEG): container finished" podID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerID="5c11d40ab4e8dd2514bf5c730cf5e4ef407679a126f490d0bf76c2dcc054d8e3" exitCode=143 Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.722936 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00c16d9d-7ff4-4112-a586-11c72b643cd5","Type":"ContainerDied","Data":"5c11d40ab4e8dd2514bf5c730cf5e4ef407679a126f490d0bf76c2dcc054d8e3"} Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.737110 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.756500961 podStartE2EDuration="5.7370931s" podCreationTimestamp="2026-02-16 15:26:16 +0000 UTC" firstStartedPulling="2026-02-16 15:26:18.34766172 +0000 UTC m=+1127.639654615" lastFinishedPulling="2026-02-16 15:26:19.328253859 +0000 UTC m=+1128.620246754" observedRunningTime="2026-02-16 15:26:21.733082965 +0000 UTC m=+1131.025075860" watchObservedRunningTime="2026-02-16 15:26:21.7370931 
+0000 UTC m=+1131.029085995" Feb 16 15:26:21 crc kubenswrapper[4835]: I0216 15:26:21.873298 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.137730 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.395050 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-555c89cd64-w6qv5"] Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.396876 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.405716 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.405869 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.416326 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-555c89cd64-w6qv5"] Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.562217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zshxp\" (UniqueName: \"kubernetes.io/projected/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-kube-api-access-zshxp\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.562511 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-config-data\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " 
pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.562798 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-public-tls-certs\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.562883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-config-data-custom\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.563095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-combined-ca-bundle\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.563179 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-logs\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.563203 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-internal-tls-certs\") pod \"barbican-api-555c89cd64-w6qv5\" 
(UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.664902 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zshxp\" (UniqueName: \"kubernetes.io/projected/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-kube-api-access-zshxp\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.664968 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-config-data\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.665004 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-public-tls-certs\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.665030 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-config-data-custom\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.665057 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-combined-ca-bundle\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: 
\"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.665094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-logs\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.665112 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-internal-tls-certs\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.666509 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-logs\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.670795 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-combined-ca-bundle\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.672020 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-config-data\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 
crc kubenswrapper[4835]: I0216 15:26:22.674957 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-internal-tls-certs\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.675521 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-config-data-custom\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.692037 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-public-tls-certs\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.696036 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zshxp\" (UniqueName: \"kubernetes.io/projected/45d4ee34-3b8b-407a-a8f9-e31b32377c0c-kube-api-access-zshxp\") pod \"barbican-api-555c89cd64-w6qv5\" (UID: \"45d4ee34-3b8b-407a-a8f9-e31b32377c0c\") " pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.759877 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerStarted","Data":"6dab0c40c3a8d601eaa4ffe2fbd6d36cf6fb992d80d2cc4f5918b43d5a722003"} Feb 16 15:26:22 crc kubenswrapper[4835]: I0216 15:26:22.770153 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:23 crc kubenswrapper[4835]: I0216 15:26:23.396601 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-555c89cd64-w6qv5"] Feb 16 15:26:23 crc kubenswrapper[4835]: I0216 15:26:23.770103 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerStarted","Data":"f3f8e09d5e948b06479284ded54feb7fa3a267eea80d69de2872015b0f731cac"} Feb 16 15:26:23 crc kubenswrapper[4835]: I0216 15:26:23.771676 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-555c89cd64-w6qv5" event={"ID":"45d4ee34-3b8b-407a-a8f9-e31b32377c0c","Type":"ContainerStarted","Data":"fe5feafcec2316b931bb79576a86ee4b3cef5cf3e99e4c9bb9ceb6ef100d29b6"} Feb 16 15:26:23 crc kubenswrapper[4835]: I0216 15:26:23.771720 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-555c89cd64-w6qv5" event={"ID":"45d4ee34-3b8b-407a-a8f9-e31b32377c0c","Type":"ContainerStarted","Data":"9531b9667fb57dcea79736bdacd6538ee70b6d2525bf5d80a0455ba47a652472"} Feb 16 15:26:24 crc kubenswrapper[4835]: I0216 15:26:24.780380 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerStarted","Data":"9e2b082874c49185f54f8d2208da66f11fd430f24757f32be3226493c6db692a"} Feb 16 15:26:24 crc kubenswrapper[4835]: I0216 15:26:24.782077 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-555c89cd64-w6qv5" event={"ID":"45d4ee34-3b8b-407a-a8f9-e31b32377c0c","Type":"ContainerStarted","Data":"ef20249afc42ffc88efc875748144ae25c336980c4b32b0095236a976b626d17"} Feb 16 15:26:24 crc kubenswrapper[4835]: I0216 15:26:24.782329 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:24 crc 
kubenswrapper[4835]: I0216 15:26:24.808245 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-555c89cd64-w6qv5" podStartSLOduration=2.808224283 podStartE2EDuration="2.808224283s" podCreationTimestamp="2026-02-16 15:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:24.801269612 +0000 UTC m=+1134.093262547" watchObservedRunningTime="2026-02-16 15:26:24.808224283 +0000 UTC m=+1134.100217178" Feb 16 15:26:25 crc kubenswrapper[4835]: I0216 15:26:25.797968 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerStarted","Data":"e431095ba5f0364f8b7718602fa1fd38bb6cfc13e8b652a30aabc4e602cf8ca4"} Feb 16 15:26:25 crc kubenswrapper[4835]: I0216 15:26:25.798394 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.282811 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.374036 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nzvcg"] Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.374575 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" podUID="48036321-6092-4b5b-9467-af594e089508" containerName="dnsmasq-dns" containerID="cri-o://b24a289dea697393820c2aae69c24c03c0ad7d791fea5dd3069c64f04ccc49ce" gracePeriod=10 Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.488761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.490259 4835 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.576821 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.737250 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6796c594c9-9kk2v"] Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.737490 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6796c594c9-9kk2v" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-api" containerID="cri-o://56fe9445a95d69eaacf7bee6726687a08f5947747ac2d1f640b91d8e4735dd00" gracePeriod=30 Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.737557 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6796c594c9-9kk2v" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-httpd" containerID="cri-o://6299b288c705a8c8ec20ace903a9572ff64be7b0b9a433c0a95549b83586da8d" gracePeriod=30 Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.813899 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77567867dc-2fttf"] Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.815943 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77567867dc-2fttf"] Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.815996 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-658f7fbf5b-sv58x" Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.817294 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.941464 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6796c594c9-9kk2v" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.172:9696/\": read tcp 10.217.0.2:40488->10.217.0.172:9696: read: connection reset by peer" Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.978922 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerStarted","Data":"a687e0f9b23ec6f5225aaece7449a2b155b00a467b65b6bdcd8f8fb2b736e916"} Feb 16 15:26:27 crc kubenswrapper[4835]: I0216 15:26:27.979726 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.006169 4835 generic.go:334] "Generic (PLEG): container finished" podID="48036321-6092-4b5b-9467-af594e089508" containerID="b24a289dea697393820c2aae69c24c03c0ad7d791fea5dd3069c64f04ccc49ce" exitCode=0 Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.006714 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerName="cinder-scheduler" containerID="cri-o://5436b9fa5897cad3d5ee6062bb5c4b4d30dcca12a35ff6d67f35e2c3b301f718" gracePeriod=30 Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.007070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" event={"ID":"48036321-6092-4b5b-9467-af594e089508","Type":"ContainerDied","Data":"b24a289dea697393820c2aae69c24c03c0ad7d791fea5dd3069c64f04ccc49ce"} Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.007130 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerName="probe" containerID="cri-o://0f7f011f1bcc95eb50e68bb5bcc54abce374e5ff148c2e0f31b23095ab9142d1" gracePeriod=30 Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.023794 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-config\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.023856 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-public-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.023880 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xbh8\" (UniqueName: \"kubernetes.io/projected/3f3ac245-33a3-4481-8139-1d26969a6a94-kube-api-access-8xbh8\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.023968 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-combined-ca-bundle\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.023994 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-ovndb-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.024012 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-httpd-config\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.024049 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-internal-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.084241 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.205572249 podStartE2EDuration="8.084217078s" podCreationTimestamp="2026-02-16 15:26:20 +0000 UTC" firstStartedPulling="2026-02-16 15:26:21.885136494 +0000 UTC m=+1131.177129389" lastFinishedPulling="2026-02-16 15:26:26.763781313 +0000 UTC m=+1136.055774218" observedRunningTime="2026-02-16 15:26:28.045628554 +0000 UTC m=+1137.337621449" watchObservedRunningTime="2026-02-16 15:26:28.084217078 +0000 UTC m=+1137.376209973" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.130247 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-internal-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " 
pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.130333 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-config\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.130366 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-public-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.130387 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xbh8\" (UniqueName: \"kubernetes.io/projected/3f3ac245-33a3-4481-8139-1d26969a6a94-kube-api-access-8xbh8\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.130577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-combined-ca-bundle\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.130640 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-ovndb-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 
15:26:28.130658 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-httpd-config\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.139636 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-internal-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.141287 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-httpd-config\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.145016 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-combined-ca-bundle\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.145774 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-public-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.159028 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-ovndb-tls-certs\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.168989 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xbh8\" (UniqueName: \"kubernetes.io/projected/3f3ac245-33a3-4481-8139-1d26969a6a94-kube-api-access-8xbh8\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.181342 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3f3ac245-33a3-4481-8139-1d26969a6a94-config\") pod \"neutron-77567867dc-2fttf\" (UID: \"3f3ac245-33a3-4481-8139-1d26969a6a94\") " pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.272022 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:28 crc kubenswrapper[4835]: E0216 15:26:28.325925 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a0d8af0_945b_4b99_881f_06f183195461.slice/crio-conmon-6299b288c705a8c8ec20ace903a9572ff64be7b0b9a433c0a95549b83586da8d.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.392598 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.442783 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-sb\") pod \"48036321-6092-4b5b-9467-af594e089508\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.442826 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-nb\") pod \"48036321-6092-4b5b-9467-af594e089508\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.442879 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-swift-storage-0\") pod \"48036321-6092-4b5b-9467-af594e089508\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.442960 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-svc\") pod \"48036321-6092-4b5b-9467-af594e089508\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.442981 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5cs\" (UniqueName: \"kubernetes.io/projected/48036321-6092-4b5b-9467-af594e089508-kube-api-access-4s5cs\") pod \"48036321-6092-4b5b-9467-af594e089508\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.443015 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-config\") pod \"48036321-6092-4b5b-9467-af594e089508\" (UID: \"48036321-6092-4b5b-9467-af594e089508\") " Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.471898 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48036321-6092-4b5b-9467-af594e089508-kube-api-access-4s5cs" (OuterVolumeSpecName: "kube-api-access-4s5cs") pod "48036321-6092-4b5b-9467-af594e089508" (UID: "48036321-6092-4b5b-9467-af594e089508"). InnerVolumeSpecName "kube-api-access-4s5cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.527460 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48036321-6092-4b5b-9467-af594e089508" (UID: "48036321-6092-4b5b-9467-af594e089508"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.538056 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48036321-6092-4b5b-9467-af594e089508" (UID: "48036321-6092-4b5b-9467-af594e089508"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.548011 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s5cs\" (UniqueName: \"kubernetes.io/projected/48036321-6092-4b5b-9467-af594e089508-kube-api-access-4s5cs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.548041 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.548050 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.566222 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48036321-6092-4b5b-9467-af594e089508" (UID: "48036321-6092-4b5b-9467-af594e089508"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.567039 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-config" (OuterVolumeSpecName: "config") pod "48036321-6092-4b5b-9467-af594e089508" (UID: "48036321-6092-4b5b-9467-af594e089508"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.603207 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "48036321-6092-4b5b-9467-af594e089508" (UID: "48036321-6092-4b5b-9467-af594e089508"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.649559 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.649602 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.649612 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48036321-6092-4b5b-9467-af594e089508-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:28 crc kubenswrapper[4835]: I0216 15:26:28.762070 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-658f7fbf5b-sv58x" Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.038312 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" event={"ID":"48036321-6092-4b5b-9467-af594e089508","Type":"ContainerDied","Data":"62421c5f23e5282a07c40d8809cb2e19fa013dd425f0aca8da00d920e490cfc8"} Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.038358 4835 scope.go:117] "RemoveContainer" containerID="b24a289dea697393820c2aae69c24c03c0ad7d791fea5dd3069c64f04ccc49ce" Feb 16 15:26:29 crc 
kubenswrapper[4835]: I0216 15:26:29.038459 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nzvcg" Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.052664 4835 generic.go:334] "Generic (PLEG): container finished" podID="7a0d8af0-945b-4b99-881f-06f183195461" containerID="6299b288c705a8c8ec20ace903a9572ff64be7b0b9a433c0a95549b83586da8d" exitCode=0 Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.053717 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796c594c9-9kk2v" event={"ID":"7a0d8af0-945b-4b99-881f-06f183195461","Type":"ContainerDied","Data":"6299b288c705a8c8ec20ace903a9572ff64be7b0b9a433c0a95549b83586da8d"} Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.082845 4835 scope.go:117] "RemoveContainer" containerID="4e526dec782cd183a12ce2f5b7863274677827d3cbe4d522dbe2f18ffaceb5da" Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.083659 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77567867dc-2fttf"] Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.122802 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nzvcg"] Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.140277 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nzvcg"] Feb 16 15:26:29 crc kubenswrapper[4835]: E0216 15:26:29.387397 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.394293 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48036321-6092-4b5b-9467-af594e089508" 
path="/var/lib/kubelet/pods/48036321-6092-4b5b-9467-af594e089508/volumes" Feb 16 15:26:29 crc kubenswrapper[4835]: I0216 15:26:29.609571 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6796c594c9-9kk2v" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.172:9696/\": dial tcp 10.217.0.172:9696: connect: connection refused" Feb 16 15:26:30 crc kubenswrapper[4835]: I0216 15:26:30.064478 4835 generic.go:334] "Generic (PLEG): container finished" podID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerID="0f7f011f1bcc95eb50e68bb5bcc54abce374e5ff148c2e0f31b23095ab9142d1" exitCode=0 Feb 16 15:26:30 crc kubenswrapper[4835]: I0216 15:26:30.064570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302","Type":"ContainerDied","Data":"0f7f011f1bcc95eb50e68bb5bcc54abce374e5ff148c2e0f31b23095ab9142d1"} Feb 16 15:26:30 crc kubenswrapper[4835]: I0216 15:26:30.067729 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77567867dc-2fttf" event={"ID":"3f3ac245-33a3-4481-8139-1d26969a6a94","Type":"ContainerStarted","Data":"7c88641a76d9bec0b85c6dd27a9c905e1a63caff85f3416314b5bb94f9c5458d"} Feb 16 15:26:30 crc kubenswrapper[4835]: I0216 15:26:30.067762 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77567867dc-2fttf" event={"ID":"3f3ac245-33a3-4481-8139-1d26969a6a94","Type":"ContainerStarted","Data":"f2a4c22e89dd1ac49b89ffb1d7f3b80b27648eef1ebaa07cb7ce220bbae2920c"} Feb 16 15:26:30 crc kubenswrapper[4835]: I0216 15:26:30.067794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77567867dc-2fttf" event={"ID":"3f3ac245-33a3-4481-8139-1d26969a6a94","Type":"ContainerStarted","Data":"6bce67d64364179e52914b53bd89a23416e812ea401b0a0fbabef7f3835bc7e8"} Feb 16 15:26:30 crc kubenswrapper[4835]: I0216 15:26:30.067906 4835 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:30 crc kubenswrapper[4835]: I0216 15:26:30.092990 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77567867dc-2fttf" podStartSLOduration=3.092951293 podStartE2EDuration="3.092951293s" podCreationTimestamp="2026-02-16 15:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:30.082436099 +0000 UTC m=+1139.374428994" watchObservedRunningTime="2026-02-16 15:26:30.092951293 +0000 UTC m=+1139.384944188" Feb 16 15:26:30 crc kubenswrapper[4835]: I0216 15:26:30.551276 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.077104 4835 generic.go:334] "Generic (PLEG): container finished" podID="7a0d8af0-945b-4b99-881f-06f183195461" containerID="56fe9445a95d69eaacf7bee6726687a08f5947747ac2d1f640b91d8e4735dd00" exitCode=0 Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.077181 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796c594c9-9kk2v" event={"ID":"7a0d8af0-945b-4b99-881f-06f183195461","Type":"ContainerDied","Data":"56fe9445a95d69eaacf7bee6726687a08f5947747ac2d1f640b91d8e4735dd00"} Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.625586 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.814915 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-public-tls-certs\") pod \"7a0d8af0-945b-4b99-881f-06f183195461\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.815498 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-combined-ca-bundle\") pod \"7a0d8af0-945b-4b99-881f-06f183195461\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.815550 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-ovndb-tls-certs\") pod \"7a0d8af0-945b-4b99-881f-06f183195461\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.815591 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-httpd-config\") pod \"7a0d8af0-945b-4b99-881f-06f183195461\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.815649 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-internal-tls-certs\") pod \"7a0d8af0-945b-4b99-881f-06f183195461\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.815669 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-config\") pod \"7a0d8af0-945b-4b99-881f-06f183195461\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.815711 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96hr9\" (UniqueName: \"kubernetes.io/projected/7a0d8af0-945b-4b99-881f-06f183195461-kube-api-access-96hr9\") pod \"7a0d8af0-945b-4b99-881f-06f183195461\" (UID: \"7a0d8af0-945b-4b99-881f-06f183195461\") " Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.823776 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0d8af0-945b-4b99-881f-06f183195461-kube-api-access-96hr9" (OuterVolumeSpecName: "kube-api-access-96hr9") pod "7a0d8af0-945b-4b99-881f-06f183195461" (UID: "7a0d8af0-945b-4b99-881f-06f183195461"). InnerVolumeSpecName "kube-api-access-96hr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.835717 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7a0d8af0-945b-4b99-881f-06f183195461" (UID: "7a0d8af0-945b-4b99-881f-06f183195461"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.904776 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a0d8af0-945b-4b99-881f-06f183195461" (UID: "7a0d8af0-945b-4b99-881f-06f183195461"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.908328 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a0d8af0-945b-4b99-881f-06f183195461" (UID: "7a0d8af0-945b-4b99-881f-06f183195461"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.908382 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7a0d8af0-945b-4b99-881f-06f183195461" (UID: "7a0d8af0-945b-4b99-881f-06f183195461"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.917333 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.917360 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.917369 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.917378 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-internal-tls-certs\") on node 
\"crc\" DevicePath \"\"" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.917386 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96hr9\" (UniqueName: \"kubernetes.io/projected/7a0d8af0-945b-4b99-881f-06f183195461-kube-api-access-96hr9\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.939649 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7a0d8af0-945b-4b99-881f-06f183195461" (UID: "7a0d8af0-945b-4b99-881f-06f183195461"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:31 crc kubenswrapper[4835]: I0216 15:26:31.975733 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-config" (OuterVolumeSpecName: "config") pod "7a0d8af0-945b-4b99-881f-06f183195461" (UID: "7a0d8af0-945b-4b99-881f-06f183195461"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.013585 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.018841 4835 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.018875 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7a0d8af0-945b-4b99-881f-06f183195461-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.050238 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.103566 4835 generic.go:334] "Generic (PLEG): container finished" podID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerID="5436b9fa5897cad3d5ee6062bb5c4b4d30dcca12a35ff6d67f35e2c3b301f718" exitCode=0 Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.103774 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302","Type":"ContainerDied","Data":"5436b9fa5897cad3d5ee6062bb5c4b4d30dcca12a35ff6d67f35e2c3b301f718"} Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.125406 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6796c594c9-9kk2v" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.125453 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6796c594c9-9kk2v" event={"ID":"7a0d8af0-945b-4b99-881f-06f183195461","Type":"ContainerDied","Data":"13a0842d6754f789c1e890d807b4eba2cebea15ec8ac762b9805a7aaec2f4ea2"} Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.125496 4835 scope.go:117] "RemoveContainer" containerID="6299b288c705a8c8ec20ace903a9572ff64be7b0b9a433c0a95549b83586da8d" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.169152 4835 scope.go:117] "RemoveContainer" containerID="56fe9445a95d69eaacf7bee6726687a08f5947747ac2d1f640b91d8e4735dd00" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.175259 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6796c594c9-9kk2v"] Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.182519 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6796c594c9-9kk2v"] Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.274887 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db8d74b8d-b8dp6"] Feb 16 15:26:32 crc kubenswrapper[4835]: E0216 15:26:32.275292 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48036321-6092-4b5b-9467-af594e089508" containerName="init" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.275310 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48036321-6092-4b5b-9467-af594e089508" containerName="init" Feb 16 15:26:32 crc kubenswrapper[4835]: E0216 15:26:32.275330 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-httpd" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.275337 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-httpd" Feb 16 15:26:32 crc 
kubenswrapper[4835]: E0216 15:26:32.275347 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-api" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.275353 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-api" Feb 16 15:26:32 crc kubenswrapper[4835]: E0216 15:26:32.275373 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48036321-6092-4b5b-9467-af594e089508" containerName="dnsmasq-dns" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.275384 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48036321-6092-4b5b-9467-af594e089508" containerName="dnsmasq-dns" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.275607 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-httpd" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.275641 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="48036321-6092-4b5b-9467-af594e089508" containerName="dnsmasq-dns" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.275654 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0d8af0-945b-4b99-881f-06f183195461" containerName="neutron-api" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.279174 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.284171 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.300964 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db8d74b8d-b8dp6"] Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.428265 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data-custom\") pod \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.428398 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lpkh\" (UniqueName: \"kubernetes.io/projected/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-kube-api-access-6lpkh\") pod \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.428768 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-etc-machine-id\") pod \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.428818 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-scripts\") pod \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.428964 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-combined-ca-bundle\") pod \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " 
Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.428996 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data\") pod \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\" (UID: \"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302\") " Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.429314 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-public-tls-certs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.429348 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-combined-ca-bundle\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.429372 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-logs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.429406 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-config-data\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.429452 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86nl6\" (UniqueName: \"kubernetes.io/projected/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-kube-api-access-86nl6\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.430389 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-scripts\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.430654 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-internal-tls-certs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.431285 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" (UID: "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.434959 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" (UID: "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.447471 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-scripts" (OuterVolumeSpecName: "scripts") pod "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" (UID: "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.453697 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-kube-api-access-6lpkh" (OuterVolumeSpecName: "kube-api-access-6lpkh") pod "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" (UID: "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302"). InnerVolumeSpecName "kube-api-access-6lpkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.507730 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" (UID: "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.531961 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-logs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532043 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-config-data\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532082 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86nl6\" (UniqueName: \"kubernetes.io/projected/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-kube-api-access-86nl6\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532136 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-scripts\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532205 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-internal-tls-certs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 
15:26:32.532281 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-public-tls-certs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532302 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-combined-ca-bundle\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532364 4835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532359 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-logs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532375 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532430 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532445 4835 reconciler_common.go:293] "Volume detached for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.532456 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lpkh\" (UniqueName: \"kubernetes.io/projected/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-kube-api-access-6lpkh\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.537909 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-public-tls-certs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.537935 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-internal-tls-certs\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.538289 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-combined-ca-bundle\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.540789 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-scripts\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.559843 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86nl6\" (UniqueName: \"kubernetes.io/projected/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-kube-api-access-86nl6\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.560451 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad10e2a4-7521-47a2-bf1f-d1e4b83b1136-config-data\") pod \"placement-db8d74b8d-b8dp6\" (UID: \"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136\") " pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.587252 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data" (OuterVolumeSpecName: "config-data") pod "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" (UID: "ab3be26e-b15a-45b0-a4e4-dd3f21ca1302"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.600331 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:32 crc kubenswrapper[4835]: I0216 15:26:32.657404 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.179868 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab3be26e-b15a-45b0-a4e4-dd3f21ca1302","Type":"ContainerDied","Data":"b6a889c7ed26fa4ce0405eb20bedbcf311012e210b1549e718bdb4a77f9d5227"} Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.180179 4835 scope.go:117] "RemoveContainer" containerID="0f7f011f1bcc95eb50e68bb5bcc54abce374e5ff148c2e0f31b23095ab9142d1" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.180272 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.227823 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.248519 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.252825 4835 scope.go:117] "RemoveContainer" containerID="5436b9fa5897cad3d5ee6062bb5c4b4d30dcca12a35ff6d67f35e2c3b301f718" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.270104 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:26:33 crc kubenswrapper[4835]: E0216 15:26:33.270473 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerName="cinder-scheduler" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.270485 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" 
containerName="cinder-scheduler" Feb 16 15:26:33 crc kubenswrapper[4835]: E0216 15:26:33.270516 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerName="probe" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.270537 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerName="probe" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.270724 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerName="probe" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.270742 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" containerName="cinder-scheduler" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.271721 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.275699 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.284418 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.375390 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.375483 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.375515 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.375578 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.375806 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqwn\" (UniqueName: \"kubernetes.io/projected/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-kube-api-access-kmqwn\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.375890 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.376985 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db8d74b8d-b8dp6"] Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.391959 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0d8af0-945b-4b99-881f-06f183195461" 
path="/var/lib/kubelet/pods/7a0d8af0-945b-4b99-881f-06f183195461/volumes" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.392741 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3be26e-b15a-45b0-a4e4-dd3f21ca1302" path="/var/lib/kubelet/pods/ab3be26e-b15a-45b0-a4e4-dd3f21ca1302/volumes" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.477383 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.477445 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.477508 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.477593 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmqwn\" (UniqueName: \"kubernetes.io/projected/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-kube-api-access-kmqwn\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.477633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.477662 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.480834 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.493030 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.493085 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.494798 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " 
pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.496264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmqwn\" (UniqueName: \"kubernetes.io/projected/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-kube-api-access-kmqwn\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.497877 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:26:33 crc kubenswrapper[4835]: I0216 15:26:33.613887 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.146149 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:26:34 crc kubenswrapper[4835]: W0216 15:26:34.147494 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff575bdb_76c0_4c2d_8fb5_fa5f7efbadfc.slice/crio-69e1577f20448b521fc803dd6272d6031ef61c17affbc97fe73361d34a34737c WatchSource:0}: Error finding container 69e1577f20448b521fc803dd6272d6031ef61c17affbc97fe73361d34a34737c: Status 404 returned error can't find the container with id 69e1577f20448b521fc803dd6272d6031ef61c17affbc97fe73361d34a34737c Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.195258 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db8d74b8d-b8dp6" event={"ID":"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136","Type":"ContainerStarted","Data":"6ecc4aa093f688a548ed9e381fe90735f06c9e5d69efa5d199e0f4f511780dcc"} Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.195310 
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db8d74b8d-b8dp6" event={"ID":"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136","Type":"ContainerStarted","Data":"6fb66b698dd5b8be68d56301a65dc928da8c10071dd7a5faf485df5bfcb1f79e"} Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.195324 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db8d74b8d-b8dp6" event={"ID":"ad10e2a4-7521-47a2-bf1f-d1e4b83b1136","Type":"ContainerStarted","Data":"d3ac6673c878d271dd9f89a99ca9563263a14dc3d375d8102e7fdfd3b1944111"} Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.195359 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.195374 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.196980 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc","Type":"ContainerStarted","Data":"69e1577f20448b521fc803dd6272d6031ef61c17affbc97fe73361d34a34737c"} Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.230601 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db8d74b8d-b8dp6" podStartSLOduration=2.230582419 podStartE2EDuration="2.230582419s" podCreationTimestamp="2026-02-16 15:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:34.217293493 +0000 UTC m=+1143.509286398" watchObservedRunningTime="2026-02-16 15:26:34.230582419 +0000 UTC m=+1143.522575314" Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.555193 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:34 crc 
kubenswrapper[4835]: I0216 15:26:34.664472 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-555c89cd64-w6qv5" Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.727674 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-658f7fbf5b-sv58x"] Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.728099 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-658f7fbf5b-sv58x" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api-log" containerID="cri-o://21a59e9e2b6b7051d51e25711d66dda32b7962fb5d2140fec0d523215ca58872" gracePeriod=30 Feb 16 15:26:34 crc kubenswrapper[4835]: I0216 15:26:34.728602 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-658f7fbf5b-sv58x" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api" containerID="cri-o://c8838385f953194be7654695a4e7a775cfb7f55424c262061d6ec5be78b70914" gracePeriod=30 Feb 16 15:26:35 crc kubenswrapper[4835]: I0216 15:26:35.212805 4835 generic.go:334] "Generic (PLEG): container finished" podID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerID="21a59e9e2b6b7051d51e25711d66dda32b7962fb5d2140fec0d523215ca58872" exitCode=143 Feb 16 15:26:35 crc kubenswrapper[4835]: I0216 15:26:35.212910 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-658f7fbf5b-sv58x" event={"ID":"6bd7f83f-1998-4efd-96eb-3287d2c721c4","Type":"ContainerDied","Data":"21a59e9e2b6b7051d51e25711d66dda32b7962fb5d2140fec0d523215ca58872"} Feb 16 15:26:35 crc kubenswrapper[4835]: I0216 15:26:35.216175 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc","Type":"ContainerStarted","Data":"a5c6a8bbbe3cad156e5b58b1124fa8686a517448e4df65223d8348517049f56e"} Feb 16 15:26:36 crc kubenswrapper[4835]: I0216 15:26:36.235016 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc","Type":"ContainerStarted","Data":"683d5d838dd12f2e299a53820442e0b065a7ee6ba11c27bb0da1fe1fd2bf60c8"} Feb 16 15:26:36 crc kubenswrapper[4835]: I0216 15:26:36.288433 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.288413772 podStartE2EDuration="3.288413772s" podCreationTimestamp="2026-02-16 15:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:36.254108249 +0000 UTC m=+1145.546101164" watchObservedRunningTime="2026-02-16 15:26:36.288413772 +0000 UTC m=+1145.580406667" Feb 16 15:26:37 crc kubenswrapper[4835]: I0216 15:26:37.925167 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-658f7fbf5b-sv58x" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": read tcp 10.217.0.2:32892->10.217.0.178:9311: read: connection reset by peer" Feb 16 15:26:37 crc kubenswrapper[4835]: I0216 15:26:37.925196 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-658f7fbf5b-sv58x" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": read tcp 10.217.0.2:32888->10.217.0.178:9311: read: connection reset by peer" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.351387 4835 generic.go:334] "Generic (PLEG): container finished" podID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerID="c8838385f953194be7654695a4e7a775cfb7f55424c262061d6ec5be78b70914" exitCode=0 Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.351713 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-658f7fbf5b-sv58x" event={"ID":"6bd7f83f-1998-4efd-96eb-3287d2c721c4","Type":"ContainerDied","Data":"c8838385f953194be7654695a4e7a775cfb7f55424c262061d6ec5be78b70914"} Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.527359 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-658f7fbf5b-sv58x" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.614363 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.703465 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data\") pod \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.703569 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data-custom\") pod \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.703608 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-combined-ca-bundle\") pod \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.703663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f83f-1998-4efd-96eb-3287d2c721c4-logs\") pod \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " Feb 16 15:26:38 crc kubenswrapper[4835]: 
I0216 15:26:38.703716 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm6v5\" (UniqueName: \"kubernetes.io/projected/6bd7f83f-1998-4efd-96eb-3287d2c721c4-kube-api-access-vm6v5\") pod \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\" (UID: \"6bd7f83f-1998-4efd-96eb-3287d2c721c4\") " Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.705913 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd7f83f-1998-4efd-96eb-3287d2c721c4-logs" (OuterVolumeSpecName: "logs") pod "6bd7f83f-1998-4efd-96eb-3287d2c721c4" (UID: "6bd7f83f-1998-4efd-96eb-3287d2c721c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.710131 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd7f83f-1998-4efd-96eb-3287d2c721c4-kube-api-access-vm6v5" (OuterVolumeSpecName: "kube-api-access-vm6v5") pod "6bd7f83f-1998-4efd-96eb-3287d2c721c4" (UID: "6bd7f83f-1998-4efd-96eb-3287d2c721c4"). InnerVolumeSpecName "kube-api-access-vm6v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.711209 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6bd7f83f-1998-4efd-96eb-3287d2c721c4" (UID: "6bd7f83f-1998-4efd-96eb-3287d2c721c4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.731651 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bd7f83f-1998-4efd-96eb-3287d2c721c4" (UID: "6bd7f83f-1998-4efd-96eb-3287d2c721c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.753858 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data" (OuterVolumeSpecName: "config-data") pod "6bd7f83f-1998-4efd-96eb-3287d2c721c4" (UID: "6bd7f83f-1998-4efd-96eb-3287d2c721c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.806354 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm6v5\" (UniqueName: \"kubernetes.io/projected/6bd7f83f-1998-4efd-96eb-3287d2c721c4-kube-api-access-vm6v5\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.806384 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.806393 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.806403 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd7f83f-1998-4efd-96eb-3287d2c721c4-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\""
Feb 16 15:26:38 crc kubenswrapper[4835]: I0216 15:26:38.806413 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd7f83f-1998-4efd-96eb-3287d2c721c4-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:39 crc kubenswrapper[4835]: I0216 15:26:39.365453 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-658f7fbf5b-sv58x" event={"ID":"6bd7f83f-1998-4efd-96eb-3287d2c721c4","Type":"ContainerDied","Data":"1b185866603ad532ad54735d7e0242fc5b8f22e15a097cd7841891045efb1d1c"}
Feb 16 15:26:39 crc kubenswrapper[4835]: I0216 15:26:39.365523 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-658f7fbf5b-sv58x"
Feb 16 15:26:39 crc kubenswrapper[4835]: I0216 15:26:39.365601 4835 scope.go:117] "RemoveContainer" containerID="c8838385f953194be7654695a4e7a775cfb7f55424c262061d6ec5be78b70914"
Feb 16 15:26:39 crc kubenswrapper[4835]: I0216 15:26:39.389541 4835 scope.go:117] "RemoveContainer" containerID="21a59e9e2b6b7051d51e25711d66dda32b7962fb5d2140fec0d523215ca58872"
Feb 16 15:26:39 crc kubenswrapper[4835]: I0216 15:26:39.448317 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-658f7fbf5b-sv58x"]
Feb 16 15:26:39 crc kubenswrapper[4835]: I0216 15:26:39.457793 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-658f7fbf5b-sv58x"]
Feb 16 15:26:39 crc kubenswrapper[4835]: I0216 15:26:39.857394 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f568f5d7f-9h6bh"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.387411 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" path="/var/lib/kubelet/pods/6bd7f83f-1998-4efd-96eb-3287d2c721c4/volumes"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.918264 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 16 15:26:41 crc kubenswrapper[4835]: E0216 15:26:41.918669 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.918688 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api"
Feb 16 15:26:41 crc kubenswrapper[4835]: E0216 15:26:41.918701 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api-log"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.918708 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api-log"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.918867 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.918891 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd7f83f-1998-4efd-96eb-3287d2c721c4" containerName="barbican-api-log"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.919543 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.921396 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.922352 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jljcl"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.922599 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 16 15:26:41 crc kubenswrapper[4835]: I0216 15:26:41.945541 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.089386 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rvkb\" (UniqueName: \"kubernetes.io/projected/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-kube-api-access-9rvkb\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.089503 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.089579 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.089699 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.169386 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 16 15:26:42 crc kubenswrapper[4835]: E0216 15:26:42.170157 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-9rvkb openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="6f079f1c-5da0-4ead-88a8-7c7b0a30b655"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.179578 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.190954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rvkb\" (UniqueName: \"kubernetes.io/projected/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-kube-api-access-9rvkb\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.191048 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.191082 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.191163 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.192876 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: E0216 15:26:42.194789 4835 projected.go:194] Error preparing data for projected volume kube-api-access-9rvkb for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6f079f1c-5da0-4ead-88a8-7c7b0a30b655) does not match the UID in record. The object might have been deleted and then recreated
Feb 16 15:26:42 crc kubenswrapper[4835]: E0216 15:26:42.194860 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-kube-api-access-9rvkb podName:6f079f1c-5da0-4ead-88a8-7c7b0a30b655 nodeName:}" failed. No retries permitted until 2026-02-16 15:26:42.694841667 +0000 UTC m=+1151.986834562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9rvkb" (UniqueName: "kubernetes.io/projected/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-kube-api-access-9rvkb") pod "openstackclient" (UID: "6f079f1c-5da0-4ead-88a8-7c7b0a30b655") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (6f079f1c-5da0-4ead-88a8-7c7b0a30b655) does not match the UID in record. The object might have been deleted and then recreated
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.202270 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.205395 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.225106 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.226388 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.245326 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.398169 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.399063 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0764574-e9ce-46bd-9cf5-7aefa9b455db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.399298 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0764574-e9ce-46bd-9cf5-7aefa9b455db-openstack-config\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.399363 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0764574-e9ce-46bd-9cf5-7aefa9b455db-openstack-config-secret\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.399436 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpmbq\" (UniqueName: \"kubernetes.io/projected/a0764574-e9ce-46bd-9cf5-7aefa9b455db-kube-api-access-hpmbq\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.407753 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6f079f1c-5da0-4ead-88a8-7c7b0a30b655" podUID="a0764574-e9ce-46bd-9cf5-7aefa9b455db"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.414452 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.500775 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0764574-e9ce-46bd-9cf5-7aefa9b455db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.500915 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0764574-e9ce-46bd-9cf5-7aefa9b455db-openstack-config\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.500946 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0764574-e9ce-46bd-9cf5-7aefa9b455db-openstack-config-secret\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.500992 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpmbq\" (UniqueName: \"kubernetes.io/projected/a0764574-e9ce-46bd-9cf5-7aefa9b455db-kube-api-access-hpmbq\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.501739 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a0764574-e9ce-46bd-9cf5-7aefa9b455db-openstack-config\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.504244 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a0764574-e9ce-46bd-9cf5-7aefa9b455db-openstack-config-secret\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.504997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0764574-e9ce-46bd-9cf5-7aefa9b455db-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.522606 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpmbq\" (UniqueName: \"kubernetes.io/projected/a0764574-e9ce-46bd-9cf5-7aefa9b455db-kube-api-access-hpmbq\") pod \"openstackclient\" (UID: \"a0764574-e9ce-46bd-9cf5-7aefa9b455db\") " pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.590460 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.603010 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config\") pod \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") "
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.603156 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config-secret\") pod \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") "
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.603231 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-combined-ca-bundle\") pod \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\" (UID: \"6f079f1c-5da0-4ead-88a8-7c7b0a30b655\") "
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.603518 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6f079f1c-5da0-4ead-88a8-7c7b0a30b655" (UID: "6f079f1c-5da0-4ead-88a8-7c7b0a30b655"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.603870 4835 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.603893 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rvkb\" (UniqueName: \"kubernetes.io/projected/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-kube-api-access-9rvkb\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.606971 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f079f1c-5da0-4ead-88a8-7c7b0a30b655" (UID: "6f079f1c-5da0-4ead-88a8-7c7b0a30b655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.608680 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6f079f1c-5da0-4ead-88a8-7c7b0a30b655" (UID: "6f079f1c-5da0-4ead-88a8-7c7b0a30b655"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.705660 4835 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:42 crc kubenswrapper[4835]: I0216 15:26:42.705947 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f079f1c-5da0-4ead-88a8-7c7b0a30b655-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:26:43 crc kubenswrapper[4835]: I0216 15:26:43.075321 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 15:26:43 crc kubenswrapper[4835]: W0216 15:26:43.083202 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0764574_e9ce_46bd_9cf5_7aefa9b455db.slice/crio-a674b8a626cce3f1a30e60c933541673eb20b7226d10f42ad2ea47c37ca8afc5 WatchSource:0}: Error finding container a674b8a626cce3f1a30e60c933541673eb20b7226d10f42ad2ea47c37ca8afc5: Status 404 returned error can't find the container with id a674b8a626cce3f1a30e60c933541673eb20b7226d10f42ad2ea47c37ca8afc5
Feb 16 15:26:43 crc kubenswrapper[4835]: I0216 15:26:43.390228 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f079f1c-5da0-4ead-88a8-7c7b0a30b655" path="/var/lib/kubelet/pods/6f079f1c-5da0-4ead-88a8-7c7b0a30b655/volumes"
Feb 16 15:26:43 crc kubenswrapper[4835]: I0216 15:26:43.407344 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 15:26:43 crc kubenswrapper[4835]: I0216 15:26:43.407400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a0764574-e9ce-46bd-9cf5-7aefa9b455db","Type":"ContainerStarted","Data":"a674b8a626cce3f1a30e60c933541673eb20b7226d10f42ad2ea47c37ca8afc5"}
Feb 16 15:26:43 crc kubenswrapper[4835]: I0216 15:26:43.417087 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6f079f1c-5da0-4ead-88a8-7c7b0a30b655" podUID="a0764574-e9ce-46bd-9cf5-7aefa9b455db"
Feb 16 15:26:43 crc kubenswrapper[4835]: I0216 15:26:43.923813 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 16 15:26:44 crc kubenswrapper[4835]: E0216 15:26:44.381811 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.482643 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-bb74b7cf9-wlnq6"]
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.484262 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.488140 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.488338 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.488551 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.512041 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-bb74b7cf9-wlnq6"]
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.658958 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-etc-swift\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.659024 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-internal-tls-certs\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.659101 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-log-httpd\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.659152 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64g8m\" (UniqueName: \"kubernetes.io/projected/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-kube-api-access-64g8m\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.659205 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-public-tls-certs\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.659227 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-combined-ca-bundle\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.659247 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-run-httpd\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.659267 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-config-data\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.761129 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-log-httpd\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.761666 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-log-httpd\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.761829 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64g8m\" (UniqueName: \"kubernetes.io/projected/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-kube-api-access-64g8m\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.762241 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-public-tls-certs\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.763140 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-combined-ca-bundle\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.763183 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-run-httpd\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.763211 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-config-data\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.763243 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-etc-swift\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.763296 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-internal-tls-certs\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.763766 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-run-httpd\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.771220 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-internal-tls-certs\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.771308 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-config-data\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.771647 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-etc-swift\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.771887 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-combined-ca-bundle\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.772108 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-public-tls-certs\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.779634 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64g8m\" (UniqueName: \"kubernetes.io/projected/cb2fb4f7-7c30-454f-8b06-8ab94eed8429-kube-api-access-64g8m\") pod \"swift-proxy-bb74b7cf9-wlnq6\" (UID: \"cb2fb4f7-7c30-454f-8b06-8ab94eed8429\") " pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:45 crc kubenswrapper[4835]: I0216 15:26:45.808422 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:46 crc kubenswrapper[4835]: I0216 15:26:46.374228 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-bb74b7cf9-wlnq6"]
Feb 16 15:26:46 crc kubenswrapper[4835]: W0216 15:26:46.380467 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2fb4f7_7c30_454f_8b06_8ab94eed8429.slice/crio-323e9fca5c74327c6db11a79bc1f727bfac0168f3b9c6b9119f8000717df70d5 WatchSource:0}: Error finding container 323e9fca5c74327c6db11a79bc1f727bfac0168f3b9c6b9119f8000717df70d5: Status 404 returned error can't find the container with id 323e9fca5c74327c6db11a79bc1f727bfac0168f3b9c6b9119f8000717df70d5
Feb 16 15:26:46 crc kubenswrapper[4835]: I0216 15:26:46.453455 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bb74b7cf9-wlnq6" event={"ID":"cb2fb4f7-7c30-454f-8b06-8ab94eed8429","Type":"ContainerStarted","Data":"323e9fca5c74327c6db11a79bc1f727bfac0168f3b9c6b9119f8000717df70d5"}
Feb 16 15:26:47 crc kubenswrapper[4835]: I0216 15:26:47.464433 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bb74b7cf9-wlnq6" event={"ID":"cb2fb4f7-7c30-454f-8b06-8ab94eed8429","Type":"ContainerStarted","Data":"55fd74215eeed75ca6ba526a9f21c37796903caef28e38634ca412672318a2e3"}
Feb 16 15:26:47 crc kubenswrapper[4835]: I0216 15:26:47.464746 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bb74b7cf9-wlnq6" event={"ID":"cb2fb4f7-7c30-454f-8b06-8ab94eed8429","Type":"ContainerStarted","Data":"c0825039a94b6259dc207afb2a420776d8d67b71852051ad0eb0338839629fef"}
Feb 16 15:26:47 crc kubenswrapper[4835]: I0216 15:26:47.464921 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.077560 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-bb74b7cf9-wlnq6" podStartSLOduration=3.077519974 podStartE2EDuration="3.077519974s" podCreationTimestamp="2026-02-16 15:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:26:47.480936843 +0000 UTC m=+1156.772929738" watchObservedRunningTime="2026-02-16 15:26:48.077519974 +0000 UTC m=+1157.369512869"
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.080080 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.080296 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c382add-fd25-4394-a43a-b4992607986b" containerName="glance-log" containerID="cri-o://09eb3c91b9f2c05cfd9345b6506600721f584fdf65ae03217acae2ed909637a4" gracePeriod=30
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.080385 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c382add-fd25-4394-a43a-b4992607986b" containerName="glance-httpd" containerID="cri-o://2bbf685772ba686e58a400b5e59844fd03d345772e79c6e5018b55a3734fc7fe" gracePeriod=30
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.477313 4835 generic.go:334] "Generic (PLEG): container finished" podID="8c382add-fd25-4394-a43a-b4992607986b" containerID="09eb3c91b9f2c05cfd9345b6506600721f584fdf65ae03217acae2ed909637a4" exitCode=143
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.477392 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c382add-fd25-4394-a43a-b4992607986b","Type":"ContainerDied","Data":"09eb3c91b9f2c05cfd9345b6506600721f584fdf65ae03217acae2ed909637a4"}
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.477585 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-bb74b7cf9-wlnq6"
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.534164 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.534429 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="ceilometer-central-agent" containerID="cri-o://f3f8e09d5e948b06479284ded54feb7fa3a267eea80d69de2872015b0f731cac" gracePeriod=30
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.534484 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="ceilometer-notification-agent" containerID="cri-o://9e2b082874c49185f54f8d2208da66f11fd430f24757f32be3226493c6db692a" gracePeriod=30
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.534492 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="sg-core" containerID="cri-o://e431095ba5f0364f8b7718602fa1fd38bb6cfc13e8b652a30aabc4e602cf8ca4" gracePeriod=30
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.534481 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="proxy-httpd" containerID="cri-o://a687e0f9b23ec6f5225aaece7449a2b155b00a467b65b6bdcd8f8fb2b736e916" gracePeriod=30
Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.540979 4835
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.182:3000/\": EOF" Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.586653 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.586707 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.586750 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.587604 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55a8425e60a5ca5af019911f05c32c6de22275f80b64e52b734846168a32e3b3"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:26:48 crc kubenswrapper[4835]: I0216 15:26:48.587663 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://55a8425e60a5ca5af019911f05c32c6de22275f80b64e52b734846168a32e3b3" 
gracePeriod=600 Feb 16 15:26:48 crc kubenswrapper[4835]: E0216 15:26:48.874586 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50108a39_fee9_46bc_a8f0_6c250e5fb27e.slice/crio-conmon-a687e0f9b23ec6f5225aaece7449a2b155b00a467b65b6bdcd8f8fb2b736e916.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.495224 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="55a8425e60a5ca5af019911f05c32c6de22275f80b64e52b734846168a32e3b3" exitCode=0 Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.495301 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"55a8425e60a5ca5af019911f05c32c6de22275f80b64e52b734846168a32e3b3"} Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.495505 4835 scope.go:117] "RemoveContainer" containerID="fccec89c350093c4d7a854530c72eda475f0a4084457fa6bd80b80278b734735" Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.499807 4835 generic.go:334] "Generic (PLEG): container finished" podID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerID="a687e0f9b23ec6f5225aaece7449a2b155b00a467b65b6bdcd8f8fb2b736e916" exitCode=0 Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.499832 4835 generic.go:334] "Generic (PLEG): container finished" podID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerID="e431095ba5f0364f8b7718602fa1fd38bb6cfc13e8b652a30aabc4e602cf8ca4" exitCode=2 Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.499840 4835 generic.go:334] "Generic (PLEG): container finished" podID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerID="f3f8e09d5e948b06479284ded54feb7fa3a267eea80d69de2872015b0f731cac" exitCode=0 Feb 16 15:26:49 crc 
kubenswrapper[4835]: I0216 15:26:49.499867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerDied","Data":"a687e0f9b23ec6f5225aaece7449a2b155b00a467b65b6bdcd8f8fb2b736e916"} Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.499903 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerDied","Data":"e431095ba5f0364f8b7718602fa1fd38bb6cfc13e8b652a30aabc4e602cf8ca4"} Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.499914 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerDied","Data":"f3f8e09d5e948b06479284ded54feb7fa3a267eea80d69de2872015b0f731cac"} Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.632225 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.632471 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerName="glance-log" containerID="cri-o://0f57339025b6b66ed34b035c9d8e04adda9f38a5c30eabde613fb2f3facca854" gracePeriod=30 Feb 16 15:26:49 crc kubenswrapper[4835]: I0216 15:26:49.632545 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerName="glance-httpd" containerID="cri-o://d056f12806ae54de0097c6e6e0ee6ca75c285056e5e1d04e50fe02e2259fe295" gracePeriod=30 Feb 16 15:26:50 crc kubenswrapper[4835]: I0216 15:26:50.511629 4835 generic.go:334] "Generic (PLEG): container finished" podID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerID="0f57339025b6b66ed34b035c9d8e04adda9f38a5c30eabde613fb2f3facca854" 
exitCode=143 Feb 16 15:26:50 crc kubenswrapper[4835]: I0216 15:26:50.511707 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090f6dde-5b4b-4154-8123-6e4ba3d0e295","Type":"ContainerDied","Data":"0f57339025b6b66ed34b035c9d8e04adda9f38a5c30eabde613fb2f3facca854"} Feb 16 15:26:51 crc kubenswrapper[4835]: I0216 15:26:51.208292 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.182:3000/\": dial tcp 10.217.0.182:3000: connect: connection refused" Feb 16 15:26:51 crc kubenswrapper[4835]: I0216 15:26:51.523568 4835 generic.go:334] "Generic (PLEG): container finished" podID="8c382add-fd25-4394-a43a-b4992607986b" containerID="2bbf685772ba686e58a400b5e59844fd03d345772e79c6e5018b55a3734fc7fe" exitCode=0 Feb 16 15:26:51 crc kubenswrapper[4835]: I0216 15:26:51.523653 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c382add-fd25-4394-a43a-b4992607986b","Type":"ContainerDied","Data":"2bbf685772ba686e58a400b5e59844fd03d345772e79c6e5018b55a3734fc7fe"} Feb 16 15:26:51 crc kubenswrapper[4835]: I0216 15:26:51.525841 4835 generic.go:334] "Generic (PLEG): container finished" podID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerID="944468c04ff762fe324503254045979a3d56eff0887c6541397ac7ab3142bdcb" exitCode=137 Feb 16 15:26:51 crc kubenswrapper[4835]: I0216 15:26:51.525883 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00c16d9d-7ff4-4112-a586-11c72b643cd5","Type":"ContainerDied","Data":"944468c04ff762fe324503254045979a3d56eff0887c6541397ac7ab3142bdcb"} Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.316569 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9xl7p"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 
15:26:52.317758 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.336460 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9xl7p"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.395991 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.181:8776/healthcheck\": dial tcp 10.217.0.181:8776: connect: connection refused" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.425693 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-l5r6k"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.429376 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5bb\" (UniqueName: \"kubernetes.io/projected/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-kube-api-access-8c5bb\") pod \"nova-api-db-create-9xl7p\" (UID: \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\") " pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.429551 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-operator-scripts\") pod \"nova-api-db-create-9xl7p\" (UID: \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\") " pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.431557 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.468593 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l5r6k"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.512113 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ca02-account-create-update-cwxqj"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.521070 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.528057 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.531358 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c5bb\" (UniqueName: \"kubernetes.io/projected/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-kube-api-access-8c5bb\") pod \"nova-api-db-create-9xl7p\" (UID: \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\") " pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.531826 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8f4549-89bc-42f8-9c99-64f495486dc9-operator-scripts\") pod \"nova-cell0-db-create-l5r6k\" (UID: \"4b8f4549-89bc-42f8-9c99-64f495486dc9\") " pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.531853 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n7ph\" (UniqueName: \"kubernetes.io/projected/4b8f4549-89bc-42f8-9c99-64f495486dc9-kube-api-access-9n7ph\") pod \"nova-cell0-db-create-l5r6k\" (UID: \"4b8f4549-89bc-42f8-9c99-64f495486dc9\") " pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 
15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.531917 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-operator-scripts\") pod \"nova-api-db-create-9xl7p\" (UID: \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\") " pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.533960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-operator-scripts\") pod \"nova-api-db-create-9xl7p\" (UID: \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\") " pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.549651 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ca02-account-create-update-cwxqj"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.569455 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c5bb\" (UniqueName: \"kubernetes.io/projected/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-kube-api-access-8c5bb\") pod \"nova-api-db-create-9xl7p\" (UID: \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\") " pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.594902 4835 generic.go:334] "Generic (PLEG): container finished" podID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerID="9e2b082874c49185f54f8d2208da66f11fd430f24757f32be3226493c6db692a" exitCode=0 Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.594944 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerDied","Data":"9e2b082874c49185f54f8d2208da66f11fd430f24757f32be3226493c6db692a"} Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.601400 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-qgglt"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.602808 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.628329 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qgglt"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.633279 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8f4549-89bc-42f8-9c99-64f495486dc9-operator-scripts\") pod \"nova-cell0-db-create-l5r6k\" (UID: \"4b8f4549-89bc-42f8-9c99-64f495486dc9\") " pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.633312 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488a0795-d2d4-4eb0-899a-305faff595d5-operator-scripts\") pod \"nova-api-ca02-account-create-update-cwxqj\" (UID: \"488a0795-d2d4-4eb0-899a-305faff595d5\") " pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.633334 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n7ph\" (UniqueName: \"kubernetes.io/projected/4b8f4549-89bc-42f8-9c99-64f495486dc9-kube-api-access-9n7ph\") pod \"nova-cell0-db-create-l5r6k\" (UID: \"4b8f4549-89bc-42f8-9c99-64f495486dc9\") " pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.633388 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f4zp\" (UniqueName: \"kubernetes.io/projected/488a0795-d2d4-4eb0-899a-305faff595d5-kube-api-access-7f4zp\") pod \"nova-api-ca02-account-create-update-cwxqj\" (UID: 
\"488a0795-d2d4-4eb0-899a-305faff595d5\") " pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.634082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8f4549-89bc-42f8-9c99-64f495486dc9-operator-scripts\") pod \"nova-cell0-db-create-l5r6k\" (UID: \"4b8f4549-89bc-42f8-9c99-64f495486dc9\") " pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.634487 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.649435 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n7ph\" (UniqueName: \"kubernetes.io/projected/4b8f4549-89bc-42f8-9c99-64f495486dc9-kube-api-access-9n7ph\") pod \"nova-cell0-db-create-l5r6k\" (UID: \"4b8f4549-89bc-42f8-9c99-64f495486dc9\") " pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.655597 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cbd6-account-create-update-wl4lp"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.657152 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.662419 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.670457 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cbd6-account-create-update-wl4lp"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.735320 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488a0795-d2d4-4eb0-899a-305faff595d5-operator-scripts\") pod \"nova-api-ca02-account-create-update-cwxqj\" (UID: \"488a0795-d2d4-4eb0-899a-305faff595d5\") " pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.735401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f4zp\" (UniqueName: \"kubernetes.io/projected/488a0795-d2d4-4eb0-899a-305faff595d5-kube-api-access-7f4zp\") pod \"nova-api-ca02-account-create-update-cwxqj\" (UID: \"488a0795-d2d4-4eb0-899a-305faff595d5\") " pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.735470 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qk55\" (UniqueName: \"kubernetes.io/projected/64b7782b-02cd-48f4-9955-a3e2a698e687-kube-api-access-2qk55\") pod \"nova-cell1-db-create-qgglt\" (UID: \"64b7782b-02cd-48f4-9955-a3e2a698e687\") " pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.735515 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b7782b-02cd-48f4-9955-a3e2a698e687-operator-scripts\") pod 
\"nova-cell1-db-create-qgglt\" (UID: \"64b7782b-02cd-48f4-9955-a3e2a698e687\") " pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.736126 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488a0795-d2d4-4eb0-899a-305faff595d5-operator-scripts\") pod \"nova-api-ca02-account-create-update-cwxqj\" (UID: \"488a0795-d2d4-4eb0-899a-305faff595d5\") " pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.751690 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f4zp\" (UniqueName: \"kubernetes.io/projected/488a0795-d2d4-4eb0-899a-305faff595d5-kube-api-access-7f4zp\") pod \"nova-api-ca02-account-create-update-cwxqj\" (UID: \"488a0795-d2d4-4eb0-899a-305faff595d5\") " pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.760445 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.819291 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7e91-account-create-update-rv2sg"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.820635 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.822828 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.831745 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7e91-account-create-update-rv2sg"] Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.850905 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.852601 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24hd\" (UniqueName: \"kubernetes.io/projected/48670262-e3bb-41f9-9c4e-cb1ee3608961-kube-api-access-m24hd\") pod \"nova-cell0-cbd6-account-create-update-wl4lp\" (UID: \"48670262-e3bb-41f9-9c4e-cb1ee3608961\") " pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.852957 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qk55\" (UniqueName: \"kubernetes.io/projected/64b7782b-02cd-48f4-9955-a3e2a698e687-kube-api-access-2qk55\") pod \"nova-cell1-db-create-qgglt\" (UID: \"64b7782b-02cd-48f4-9955-a3e2a698e687\") " pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.853062 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b7782b-02cd-48f4-9955-a3e2a698e687-operator-scripts\") pod \"nova-cell1-db-create-qgglt\" (UID: \"64b7782b-02cd-48f4-9955-a3e2a698e687\") " pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.853121 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48670262-e3bb-41f9-9c4e-cb1ee3608961-operator-scripts\") pod \"nova-cell0-cbd6-account-create-update-wl4lp\" (UID: \"48670262-e3bb-41f9-9c4e-cb1ee3608961\") " pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.854190 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/64b7782b-02cd-48f4-9955-a3e2a698e687-operator-scripts\") pod \"nova-cell1-db-create-qgglt\" (UID: \"64b7782b-02cd-48f4-9955-a3e2a698e687\") " pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.868948 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qk55\" (UniqueName: \"kubernetes.io/projected/64b7782b-02cd-48f4-9955-a3e2a698e687-kube-api-access-2qk55\") pod \"nova-cell1-db-create-qgglt\" (UID: \"64b7782b-02cd-48f4-9955-a3e2a698e687\") " pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.927864 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.955154 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c74289-acd3-4046-9eb5-e8668093107a-operator-scripts\") pod \"nova-cell1-7e91-account-create-update-rv2sg\" (UID: \"02c74289-acd3-4046-9eb5-e8668093107a\") " pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.955229 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48670262-e3bb-41f9-9c4e-cb1ee3608961-operator-scripts\") pod \"nova-cell0-cbd6-account-create-update-wl4lp\" (UID: \"48670262-e3bb-41f9-9c4e-cb1ee3608961\") " pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.955465 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24hd\" (UniqueName: \"kubernetes.io/projected/48670262-e3bb-41f9-9c4e-cb1ee3608961-kube-api-access-m24hd\") pod \"nova-cell0-cbd6-account-create-update-wl4lp\" (UID: 
\"48670262-e3bb-41f9-9c4e-cb1ee3608961\") " pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.955637 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9pb\" (UniqueName: \"kubernetes.io/projected/02c74289-acd3-4046-9eb5-e8668093107a-kube-api-access-wp9pb\") pod \"nova-cell1-7e91-account-create-update-rv2sg\" (UID: \"02c74289-acd3-4046-9eb5-e8668093107a\") " pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:52 crc kubenswrapper[4835]: I0216 15:26:52.955946 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48670262-e3bb-41f9-9c4e-cb1ee3608961-operator-scripts\") pod \"nova-cell0-cbd6-account-create-update-wl4lp\" (UID: \"48670262-e3bb-41f9-9c4e-cb1ee3608961\") " pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.000840 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24hd\" (UniqueName: \"kubernetes.io/projected/48670262-e3bb-41f9-9c4e-cb1ee3608961-kube-api-access-m24hd\") pod \"nova-cell0-cbd6-account-create-update-wl4lp\" (UID: \"48670262-e3bb-41f9-9c4e-cb1ee3608961\") " pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.009845 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.057377 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp9pb\" (UniqueName: \"kubernetes.io/projected/02c74289-acd3-4046-9eb5-e8668093107a-kube-api-access-wp9pb\") pod \"nova-cell1-7e91-account-create-update-rv2sg\" (UID: \"02c74289-acd3-4046-9eb5-e8668093107a\") " pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.057476 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c74289-acd3-4046-9eb5-e8668093107a-operator-scripts\") pod \"nova-cell1-7e91-account-create-update-rv2sg\" (UID: \"02c74289-acd3-4046-9eb5-e8668093107a\") " pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.058163 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c74289-acd3-4046-9eb5-e8668093107a-operator-scripts\") pod \"nova-cell1-7e91-account-create-update-rv2sg\" (UID: \"02c74289-acd3-4046-9eb5-e8668093107a\") " pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.072978 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp9pb\" (UniqueName: \"kubernetes.io/projected/02c74289-acd3-4046-9eb5-e8668093107a-kube-api-access-wp9pb\") pod \"nova-cell1-7e91-account-create-update-rv2sg\" (UID: \"02c74289-acd3-4046-9eb5-e8668093107a\") " pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.235123 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.607118 4835 generic.go:334] "Generic (PLEG): container finished" podID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerID="d056f12806ae54de0097c6e6e0ee6ca75c285056e5e1d04e50fe02e2259fe295" exitCode=0 Feb 16 15:26:53 crc kubenswrapper[4835]: I0216 15:26:53.607206 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090f6dde-5b4b-4154-8123-6e4ba3d0e295","Type":"ContainerDied","Data":"d056f12806ae54de0097c6e6e0ee6ca75c285056e5e1d04e50fe02e2259fe295"} Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.640354 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"50108a39-fee9-46bc-a8f0-6c250e5fb27e","Type":"ContainerDied","Data":"6dab0c40c3a8d601eaa4ffe2fbd6d36cf6fb992d80d2cc4f5918b43d5a722003"} Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.642166 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dab0c40c3a8d601eaa4ffe2fbd6d36cf6fb992d80d2cc4f5918b43d5a722003" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.743624 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.822345 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-scripts\") pod \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.822405 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-sg-core-conf-yaml\") pod \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.822597 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-log-httpd\") pod \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.822725 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-config-data\") pod \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.822761 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-run-httpd\") pod \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.822804 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-combined-ca-bundle\") pod \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.822863 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbwf6\" (UniqueName: \"kubernetes.io/projected/50108a39-fee9-46bc-a8f0-6c250e5fb27e-kube-api-access-fbwf6\") pod \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\" (UID: \"50108a39-fee9-46bc-a8f0-6c250e5fb27e\") " Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.823355 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "50108a39-fee9-46bc-a8f0-6c250e5fb27e" (UID: "50108a39-fee9-46bc-a8f0-6c250e5fb27e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.823952 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "50108a39-fee9-46bc-a8f0-6c250e5fb27e" (UID: "50108a39-fee9-46bc-a8f0-6c250e5fb27e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.839366 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-bb74b7cf9-wlnq6" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.840989 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-bb74b7cf9-wlnq6" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.865473 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50108a39-fee9-46bc-a8f0-6c250e5fb27e-kube-api-access-fbwf6" (OuterVolumeSpecName: "kube-api-access-fbwf6") pod "50108a39-fee9-46bc-a8f0-6c250e5fb27e" (UID: "50108a39-fee9-46bc-a8f0-6c250e5fb27e"). InnerVolumeSpecName "kube-api-access-fbwf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.890072 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-scripts" (OuterVolumeSpecName: "scripts") pod "50108a39-fee9-46bc-a8f0-6c250e5fb27e" (UID: "50108a39-fee9-46bc-a8f0-6c250e5fb27e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.926778 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.926972 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.927030 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/50108a39-fee9-46bc-a8f0-6c250e5fb27e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:55 crc kubenswrapper[4835]: I0216 15:26:55.927117 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbwf6\" (UniqueName: \"kubernetes.io/projected/50108a39-fee9-46bc-a8f0-6c250e5fb27e-kube-api-access-fbwf6\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.018570 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "50108a39-fee9-46bc-a8f0-6c250e5fb27e" (UID: "50108a39-fee9-46bc-a8f0-6c250e5fb27e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.029073 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.108125 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50108a39-fee9-46bc-a8f0-6c250e5fb27e" (UID: "50108a39-fee9-46bc-a8f0-6c250e5fb27e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.130648 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.142711 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-config-data" (OuterVolumeSpecName: "config-data") pod "50108a39-fee9-46bc-a8f0-6c250e5fb27e" (UID: "50108a39-fee9-46bc-a8f0-6c250e5fb27e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.233462 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50108a39-fee9-46bc-a8f0-6c250e5fb27e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.251090 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.257684 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.276088 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7e91-account-create-update-rv2sg"] Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.308604 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l5r6k"] Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.443390 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-scripts\") pod \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.443457 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-combined-ca-bundle\") pod \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.443508 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-combined-ca-bundle\") pod \"00c16d9d-7ff4-4112-a586-11c72b643cd5\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.443540 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data-custom\") pod \"00c16d9d-7ff4-4112-a586-11c72b643cd5\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " 
Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.443572 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-logs\") pod \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.443592 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00c16d9d-7ff4-4112-a586-11c72b643cd5-etc-machine-id\") pod \"00c16d9d-7ff4-4112-a586-11c72b643cd5\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.443613 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c16d9d-7ff4-4112-a586-11c72b643cd5-logs\") pod \"00c16d9d-7ff4-4112-a586-11c72b643cd5\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.443630 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-internal-tls-certs\") pod \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.444501 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-httpd-run\") pod \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.444957 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod 
\"090f6dde-5b4b-4154-8123-6e4ba3d0e295\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.444994 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjh5w\" (UniqueName: \"kubernetes.io/projected/00c16d9d-7ff4-4112-a586-11c72b643cd5-kube-api-access-gjh5w\") pod \"00c16d9d-7ff4-4112-a586-11c72b643cd5\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.445032 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhh2\" (UniqueName: \"kubernetes.io/projected/090f6dde-5b4b-4154-8123-6e4ba3d0e295-kube-api-access-2dhh2\") pod \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.445066 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-config-data\") pod \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\" (UID: \"090f6dde-5b4b-4154-8123-6e4ba3d0e295\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.445109 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-scripts\") pod \"00c16d9d-7ff4-4112-a586-11c72b643cd5\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.445137 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data\") pod \"00c16d9d-7ff4-4112-a586-11c72b643cd5\" (UID: \"00c16d9d-7ff4-4112-a586-11c72b643cd5\") " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.444753 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-logs" (OuterVolumeSpecName: "logs") pod "090f6dde-5b4b-4154-8123-6e4ba3d0e295" (UID: "090f6dde-5b4b-4154-8123-6e4ba3d0e295"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.446896 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "090f6dde-5b4b-4154-8123-6e4ba3d0e295" (UID: "090f6dde-5b4b-4154-8123-6e4ba3d0e295"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.448003 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-scripts" (OuterVolumeSpecName: "scripts") pod "090f6dde-5b4b-4154-8123-6e4ba3d0e295" (UID: "090f6dde-5b4b-4154-8123-6e4ba3d0e295"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.458734 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c16d9d-7ff4-4112-a586-11c72b643cd5-logs" (OuterVolumeSpecName: "logs") pod "00c16d9d-7ff4-4112-a586-11c72b643cd5" (UID: "00c16d9d-7ff4-4112-a586-11c72b643cd5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.458790 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00c16d9d-7ff4-4112-a586-11c72b643cd5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "00c16d9d-7ff4-4112-a586-11c72b643cd5" (UID: "00c16d9d-7ff4-4112-a586-11c72b643cd5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.466792 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c16d9d-7ff4-4112-a586-11c72b643cd5-kube-api-access-gjh5w" (OuterVolumeSpecName: "kube-api-access-gjh5w") pod "00c16d9d-7ff4-4112-a586-11c72b643cd5" (UID: "00c16d9d-7ff4-4112-a586-11c72b643cd5"). InnerVolumeSpecName "kube-api-access-gjh5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.469825 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090f6dde-5b4b-4154-8123-6e4ba3d0e295-kube-api-access-2dhh2" (OuterVolumeSpecName: "kube-api-access-2dhh2") pod "090f6dde-5b4b-4154-8123-6e4ba3d0e295" (UID: "090f6dde-5b4b-4154-8123-6e4ba3d0e295"). InnerVolumeSpecName "kube-api-access-2dhh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.480693 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-scripts" (OuterVolumeSpecName: "scripts") pod "00c16d9d-7ff4-4112-a586-11c72b643cd5" (UID: "00c16d9d-7ff4-4112-a586-11c72b643cd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.532662 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "00c16d9d-7ff4-4112-a586-11c72b643cd5" (UID: "00c16d9d-7ff4-4112-a586-11c72b643cd5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.533832 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1" (OuterVolumeSpecName: "glance") pod "090f6dde-5b4b-4154-8123-6e4ba3d0e295" (UID: "090f6dde-5b4b-4154-8123-6e4ba3d0e295"). InnerVolumeSpecName "pvc-ec154127-1c50-4606-8308-de3489ef25c1". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.553991 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554018 4835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00c16d9d-7ff4-4112-a586-11c72b643cd5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554028 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00c16d9d-7ff4-4112-a586-11c72b643cd5-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554037 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/090f6dde-5b4b-4154-8123-6e4ba3d0e295-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554066 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") on node \"crc\" " Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554081 4835 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-gjh5w\" (UniqueName: \"kubernetes.io/projected/00c16d9d-7ff4-4112-a586-11c72b643cd5-kube-api-access-gjh5w\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554090 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhh2\" (UniqueName: \"kubernetes.io/projected/090f6dde-5b4b-4154-8123-6e4ba3d0e295-kube-api-access-2dhh2\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554100 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554108 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.554116 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.602141 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00c16d9d-7ff4-4112-a586-11c72b643cd5" (UID: "00c16d9d-7ff4-4112-a586-11c72b643cd5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.609287 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-config-data" (OuterVolumeSpecName: "config-data") pod "090f6dde-5b4b-4154-8123-6e4ba3d0e295" (UID: "090f6dde-5b4b-4154-8123-6e4ba3d0e295"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.629644 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "090f6dde-5b4b-4154-8123-6e4ba3d0e295" (UID: "090f6dde-5b4b-4154-8123-6e4ba3d0e295"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.660295 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.660325 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.660337 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.660329 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod 
"090f6dde-5b4b-4154-8123-6e4ba3d0e295" (UID: "090f6dde-5b4b-4154-8123-6e4ba3d0e295"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.676156 4835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.676302 4835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ec154127-1c50-4606-8308-de3489ef25c1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1") on node "crc" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.689448 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"090f6dde-5b4b-4154-8123-6e4ba3d0e295","Type":"ContainerDied","Data":"8c16bea08517f3b5c110b5f28cd37921f8b55318249deb25a4212e9a72f23375"} Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.689497 4835 scope.go:117] "RemoveContainer" containerID="d056f12806ae54de0097c6e6e0ee6ca75c285056e5e1d04e50fe02e2259fe295" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.689613 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.711460 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data" (OuterVolumeSpecName: "config-data") pod "00c16d9d-7ff4-4112-a586-11c72b643cd5" (UID: "00c16d9d-7ff4-4112-a586-11c72b643cd5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.721776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" event={"ID":"02c74289-acd3-4046-9eb5-e8668093107a","Type":"ContainerStarted","Data":"07c838aa91a1c5f6dd51e9df5ff8cc6c7e479d59f07344061c85b38d4d7c497a"} Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.737754 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l5r6k" event={"ID":"4b8f4549-89bc-42f8-9c99-64f495486dc9","Type":"ContainerStarted","Data":"4e8d5c8c064137f90d476c8704e5c9ef839b73b91efffd440a5cafa5730d1cfb"} Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.739094 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a0764574-e9ce-46bd-9cf5-7aefa9b455db","Type":"ContainerStarted","Data":"e3df36d6313c2520d2d62c66c26726646462b1dd78e2c7a2b3cea3a0be02c2d5"} Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.762160 4835 reconciler_common.go:293] "Volume detached for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.762265 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c16d9d-7ff4-4112-a586-11c72b643cd5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.767802 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/090f6dde-5b4b-4154-8123-6e4ba3d0e295-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.771868 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"00c16d9d-7ff4-4112-a586-11c72b643cd5","Type":"ContainerDied","Data":"d249a5939e2c642c01f54531f472fc4e1e4d24ecfd3591bdb6f8d6a60564a0c4"} Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.771974 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.783976 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.794265 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.804702 4835 scope.go:117] "RemoveContainer" containerID="0f57339025b6b66ed34b035c9d8e04adda9f38a5c30eabde613fb2f3facca854" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.805731 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"eb539ff3d97049cb7ff841e79b175fa7a23e4c2b3f278dee053bc66e237d104c"} Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.828478 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.847634 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:26:56 crc kubenswrapper[4835]: E0216 15:26:56.848116 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api-log" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848133 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api-log" Feb 16 15:26:56 crc kubenswrapper[4835]: E0216 15:26:56.848145 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="proxy-httpd" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848152 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="proxy-httpd" Feb 16 15:26:56 crc kubenswrapper[4835]: E0216 15:26:56.848169 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerName="glance-httpd" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848175 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerName="glance-httpd" Feb 16 15:26:56 crc kubenswrapper[4835]: E0216 15:26:56.848189 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="ceilometer-notification-agent" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848195 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="ceilometer-notification-agent" Feb 16 15:26:56 crc kubenswrapper[4835]: E0216 15:26:56.848201 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerName="glance-log" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848206 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerName="glance-log" Feb 16 15:26:56 crc kubenswrapper[4835]: E0216 15:26:56.848227 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="sg-core" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848232 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="sg-core" Feb 16 15:26:56 crc kubenswrapper[4835]: E0216 15:26:56.848251 4835 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848257 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api" Feb 16 15:26:56 crc kubenswrapper[4835]: E0216 15:26:56.848269 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="ceilometer-central-agent" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848275 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="ceilometer-central-agent" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848438 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="ceilometer-central-agent" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848450 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerName="glance-log" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848459 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api-log" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848468 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="ceilometer-notification-agent" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848475 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" containerName="cinder-api" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848485 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="sg-core" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848495 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" containerName="glance-httpd" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.848506 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" containerName="proxy-httpd" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.849613 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.851999 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.854986 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.868285 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.683629009 podStartE2EDuration="14.868266547s" podCreationTimestamp="2026-02-16 15:26:42 +0000 UTC" firstStartedPulling="2026-02-16 15:26:43.089584511 +0000 UTC m=+1152.381577396" lastFinishedPulling="2026-02-16 15:26:55.274222039 +0000 UTC m=+1164.566214934" observedRunningTime="2026-02-16 15:26:56.772373661 +0000 UTC m=+1166.064366556" watchObservedRunningTime="2026-02-16 15:26:56.868266547 +0000 UTC m=+1166.160259442" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.877823 4835 scope.go:117] "RemoveContainer" containerID="944468c04ff762fe324503254045979a3d56eff0887c6541397ac7ab3142bdcb" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.881579 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.896325 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.901518 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9xl7p"] Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.931738 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qgglt"] Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.976602 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ca02-account-create-update-cwxqj"] Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.977982 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.978026 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxncv\" (UniqueName: \"kubernetes.io/projected/b05d3637-75bb-4e72-85d0-d130d949a503-kube-api-access-pxncv\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.978073 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.978098 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.978115 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.978202 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.978244 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05d3637-75bb-4e72-85d0-d130d949a503-logs\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:56 crc kubenswrapper[4835]: I0216 15:26:56.978337 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b05d3637-75bb-4e72-85d0-d130d949a503-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.020991 4835 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-cell0-cbd6-account-create-update-wl4lp"] Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.070725 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.082858 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.082992 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-combined-ca-bundle\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.083110 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-config-data\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.083163 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-logs\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.083224 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfbjf\" (UniqueName: \"kubernetes.io/projected/8c382add-fd25-4394-a43a-b4992607986b-kube-api-access-tfbjf\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " 
Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.083275 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-scripts\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.083324 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-public-tls-certs\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.083394 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-httpd-run\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") " Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.089677 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.089737 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxncv\" (UniqueName: \"kubernetes.io/projected/b05d3637-75bb-4e72-85d0-d130d949a503-kube-api-access-pxncv\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.089841 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.089881 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.089911 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.090085 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.090185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05d3637-75bb-4e72-85d0-d130d949a503-logs\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.090315 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b05d3637-75bb-4e72-85d0-d130d949a503-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.090788 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8c382add-fd25-4394-a43a-b4992607986b" (UID: "8c382add-fd25-4394-a43a-b4992607986b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.090935 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b05d3637-75bb-4e72-85d0-d130d949a503-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.091029 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-logs" (OuterVolumeSpecName: "logs") pod "8c382add-fd25-4394-a43a-b4992607986b" (UID: "8c382add-fd25-4394-a43a-b4992607986b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.108611 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.113982 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05d3637-75bb-4e72-85d0-d130d949a503-logs\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.132674 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.143611 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.155627 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.168615 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.169303 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pxncv\" (UniqueName: \"kubernetes.io/projected/b05d3637-75bb-4e72-85d0-d130d949a503-kube-api-access-pxncv\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.173686 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-scripts" (OuterVolumeSpecName: "scripts") pod "8c382add-fd25-4394-a43a-b4992607986b" (UID: "8c382add-fd25-4394-a43a-b4992607986b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.173707 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c382add-fd25-4394-a43a-b4992607986b-kube-api-access-tfbjf" (OuterVolumeSpecName: "kube-api-access-tfbjf") pod "8c382add-fd25-4394-a43a-b4992607986b" (UID: "8c382add-fd25-4394-a43a-b4992607986b"). InnerVolumeSpecName "kube-api-access-tfbjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.174040 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.174145 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ae9ad079290f3aff53310ed8b2991a4590ed9ba10dc75692266b91e47118349/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.191162 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05d3637-75bb-4e72-85d0-d130d949a503-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.196011 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400" (OuterVolumeSpecName: "glance") pod "8c382add-fd25-4394-a43a-b4992607986b" (UID: "8c382add-fd25-4394-a43a-b4992607986b"). InnerVolumeSpecName "pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4835]: E0216 15:26:57.196469 4835 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") : UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8c382add-fd25-4394-a43a-b4992607986b/volumes/kubernetes.io~csi/pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8c382add-fd25-4394-a43a-b4992607986b/volumes/kubernetes.io~csi/pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400/vol_data.json]: open /var/lib/kubelet/pods/8c382add-fd25-4394-a43a-b4992607986b/volumes/kubernetes.io~csi/pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"8c382add-fd25-4394-a43a-b4992607986b\" (UID: \"8c382add-fd25-4394-a43a-b4992607986b\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8c382add-fd25-4394-a43a-b4992607986b/volumes/kubernetes.io~csi/pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8c382add-fd25-4394-a43a-b4992607986b/volumes/kubernetes.io~csi/pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400/vol_data.json]: open 
/var/lib/kubelet/pods/8c382add-fd25-4394-a43a-b4992607986b/volumes/kubernetes.io~csi/pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400/vol_data.json: no such file or directory" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.197637 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.197667 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") on node \"crc\" " Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.197754 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c382add-fd25-4394-a43a-b4992607986b-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.197802 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfbjf\" (UniqueName: \"kubernetes.io/projected/8c382add-fd25-4394-a43a-b4992607986b-kube-api-access-tfbjf\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.197814 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.206017 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.224976 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:57 crc kubenswrapper[4835]: E0216 15:26:57.225857 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c382add-fd25-4394-a43a-b4992607986b" 
containerName="glance-httpd" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.225871 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c382add-fd25-4394-a43a-b4992607986b" containerName="glance-httpd" Feb 16 15:26:57 crc kubenswrapper[4835]: E0216 15:26:57.225900 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c382add-fd25-4394-a43a-b4992607986b" containerName="glance-log" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.225907 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c382add-fd25-4394-a43a-b4992607986b" containerName="glance-log" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.226327 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c382add-fd25-4394-a43a-b4992607986b" containerName="glance-httpd" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.226349 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c382add-fd25-4394-a43a-b4992607986b" containerName="glance-log" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.246994 4835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.247197 4835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400") on node "crc" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.270192 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.274606 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.274919 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.290464 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.300661 4835 reconciler_common.go:293] "Volume detached for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.309041 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec154127-1c50-4606-8308-de3489ef25c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec154127-1c50-4606-8308-de3489ef25c1\") pod \"glance-default-internal-api-0\" (UID: \"b05d3637-75bb-4e72-85d0-d130d949a503\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.315654 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.318827 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.323401 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.323584 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.325176 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.335260 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.376763 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c382add-fd25-4394-a43a-b4992607986b" (UID: "8c382add-fd25-4394-a43a-b4992607986b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.401902 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-config-data\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.401947 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.401967 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.401989 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-scripts\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402010 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrs4m\" (UniqueName: \"kubernetes.io/projected/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-kube-api-access-rrs4m\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402036 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-config-data\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402082 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402246 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402275 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-logs\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402292 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-log-httpd\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402309 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-config-data-custom\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402328 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9snt\" (UniqueName: \"kubernetes.io/projected/b04726f2-53d7-41d5-8c0b-03e787f7db78-kube-api-access-j9snt\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402354 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-scripts\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402368 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402389 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-run-httpd\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402434 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-public-tls-certs\") pod \"cinder-api-0\" (UID: 
\"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.402483 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.403398 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8c382add-fd25-4394-a43a-b4992607986b" (UID: "8c382add-fd25-4394-a43a-b4992607986b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.430674 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-config-data" (OuterVolumeSpecName: "config-data") pod "8c382add-fd25-4394-a43a-b4992607986b" (UID: "8c382add-fd25-4394-a43a-b4992607986b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.479922 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c16d9d-7ff4-4112-a586-11c72b643cd5" path="/var/lib/kubelet/pods/00c16d9d-7ff4-4112-a586-11c72b643cd5/volumes" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.487689 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090f6dde-5b4b-4154-8123-6e4ba3d0e295" path="/var/lib/kubelet/pods/090f6dde-5b4b-4154-8123-6e4ba3d0e295/volumes" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.488576 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50108a39-fee9-46bc-a8f0-6c250e5fb27e" path="/var/lib/kubelet/pods/50108a39-fee9-46bc-a8f0-6c250e5fb27e/volumes" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.504807 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.504960 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-config-data\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505048 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505125 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505201 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-scripts\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505269 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrs4m\" (UniqueName: \"kubernetes.io/projected/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-kube-api-access-rrs4m\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505352 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-config-data\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505505 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " 
pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505746 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-logs\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505818 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-log-httpd\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.505926 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-config-data-custom\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.506025 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9snt\" (UniqueName: \"kubernetes.io/projected/b04726f2-53d7-41d5-8c0b-03e787f7db78-kube-api-access-j9snt\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.506108 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-scripts\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.506174 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.506241 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-run-httpd\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.506352 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.506416 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c382add-fd25-4394-a43a-b4992607986b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.508982 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-run-httpd\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.510522 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-log-httpd\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.513277 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-config-data\") pod \"cinder-api-0\" 
(UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.515933 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-logs\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.518049 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.522615 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-scripts\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.524067 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.525011 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-config-data\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.526584 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.526727 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-scripts\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.526858 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.527065 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-config-data-custom\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.534380 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.536011 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.536943 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrs4m\" (UniqueName: \"kubernetes.io/projected/4b239b59-1e9e-41e1-aa08-effb3b5cd78d-kube-api-access-rrs4m\") pod \"cinder-api-0\" (UID: \"4b239b59-1e9e-41e1-aa08-effb3b5cd78d\") " pod="openstack/cinder-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.537499 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9snt\" (UniqueName: \"kubernetes.io/projected/b04726f2-53d7-41d5-8c0b-03e787f7db78-kube-api-access-j9snt\") pod \"ceilometer-0\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " pod="openstack/ceilometer-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.804780 4835 generic.go:334] "Generic (PLEG): container finished" podID="4b8f4549-89bc-42f8-9c99-64f495486dc9" containerID="d9e7d4bbd2543f0d028b8b255a36451abb6ca96905fb28f9324e7ca0dcd88190" exitCode=0 Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.805066 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l5r6k" event={"ID":"4b8f4549-89bc-42f8-9c99-64f495486dc9","Type":"ContainerDied","Data":"d9e7d4bbd2543f0d028b8b255a36451abb6ca96905fb28f9324e7ca0dcd88190"} Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.808499 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.808505 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c382add-fd25-4394-a43a-b4992607986b","Type":"ContainerDied","Data":"8ebb5564b87c270ceef50396d700c3ad4cbef942b1592322421f0c756b0ea606"} Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.810259 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xl7p" event={"ID":"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7","Type":"ContainerStarted","Data":"b091906c9dcbbf4c4745cecb629a458733336750037a398380a1fa21c12413fd"} Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.811238 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ca02-account-create-update-cwxqj" event={"ID":"488a0795-d2d4-4eb0-899a-305faff595d5","Type":"ContainerStarted","Data":"5eb9674e58f4059eb571b20cdbc13e586e7b5ade00999b111e17c6266f8c3d74"} Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.815620 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qgglt" event={"ID":"64b7782b-02cd-48f4-9955-a3e2a698e687","Type":"ContainerStarted","Data":"d10f69646c7b6bcdb461ef7941644bd586aa59f0d83f6a259ebec75a427d440b"} Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.817171 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.820747 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" event={"ID":"48670262-e3bb-41f9-9c4e-cb1ee3608961","Type":"ContainerStarted","Data":"28206fa967f04cfac6540618b135301368f382eeb6e42a8f0f20da06ca419fb6"} Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.824652 4835 generic.go:334] "Generic (PLEG): container finished" podID="02c74289-acd3-4046-9eb5-e8668093107a" containerID="e1272809359d5bc1d4d558973bdf2b6e9666c62e80f04be8847d592eaad7584a" exitCode=0 Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.824729 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" event={"ID":"02c74289-acd3-4046-9eb5-e8668093107a","Type":"ContainerDied","Data":"e1272809359d5bc1d4d558973bdf2b6e9666c62e80f04be8847d592eaad7584a"} Feb 16 15:26:57 crc kubenswrapper[4835]: I0216 15:26:57.833553 4835 scope.go:117] "RemoveContainer" containerID="5c11d40ab4e8dd2514bf5c730cf5e4ef407679a126f490d0bf76c2dcc054d8e3" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.002869 4835 scope.go:117] "RemoveContainer" containerID="2bbf685772ba686e58a400b5e59844fd03d345772e79c6e5018b55a3734fc7fe" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.010216 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.055475 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.065008 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.071555 4835 scope.go:117] "RemoveContainer" containerID="09eb3c91b9f2c05cfd9345b6506600721f584fdf65ae03217acae2ed909637a4" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.106000 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.119609 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.121488 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.129437 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.129795 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.132633 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.250596 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.250883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.250904 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr8p5\" (UniqueName: \"kubernetes.io/projected/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-kube-api-access-fr8p5\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.250927 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.250957 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-logs\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.251016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.251044 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.251062 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.331010 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77567867dc-2fttf" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.353224 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.353286 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.353311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.353421 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.353441 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.353462 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr8p5\" (UniqueName: \"kubernetes.io/projected/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-kube-api-access-fr8p5\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.353479 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.353507 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-logs\") pod \"glance-default-external-api-0\" (UID: 
\"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.354079 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-logs\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.355444 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.362369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.363281 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.372193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc 
kubenswrapper[4835]: I0216 15:26:58.373163 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.373202 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/568e09178e07d31d4dfb537786fa6b565f218555c933df30dac8ee385e3f74f7/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.388417 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr8p5\" (UniqueName: \"kubernetes.io/projected/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-kube-api-access-fr8p5\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.393229 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.426582 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d7bb8ff7b-4thhg"] Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.426963 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d7bb8ff7b-4thhg" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerName="neutron-api" 
containerID="cri-o://b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604" gracePeriod=30 Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.427192 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5d7bb8ff7b-4thhg" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerName="neutron-httpd" containerID="cri-o://f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4" gracePeriod=30 Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.489651 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cc1a24e4-8032-4c96-8ab8-0197601f4400\") pod \"glance-default-external-api-0\" (UID: \"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: W0216 15:26:58.515756 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb05d3637_75bb_4e72_85d0_d130d949a503.slice/crio-f4d94fd5db0de063e4abbf94014884a3f46d93ba96861111260d782170450faa WatchSource:0}: Error finding container f4d94fd5db0de063e4abbf94014884a3f46d93ba96861111260d782170450faa: Status 404 returned error can't find the container with id f4d94fd5db0de063e4abbf94014884a3f46d93ba96861111260d782170450faa Feb 16 15:26:58 crc kubenswrapper[4835]: E0216 15:26:58.535609 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:26:58 crc kubenswrapper[4835]: E0216 15:26:58.535654 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:26:58 crc kubenswrapper[4835]: E0216 15:26:58.535776 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:26:58 crc kubenswrapper[4835]: E0216 15:26:58.537624 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.543844 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.622253 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.706762 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.770100 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.849501 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b05d3637-75bb-4e72-85d0-d130d949a503","Type":"ContainerStarted","Data":"f4d94fd5db0de063e4abbf94014884a3f46d93ba96861111260d782170450faa"} Feb 16 15:26:58 crc kubenswrapper[4835]: W0216 15:26:58.851310 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb04726f2_53d7_41d5_8c0b_03e787f7db78.slice/crio-1db66da23ce616d898bb59682d8b3ec392b92ad504671cde0f762982785ff38b WatchSource:0}: Error finding container 1db66da23ce616d898bb59682d8b3ec392b92ad504671cde0f762982785ff38b: Status 404 returned error can't find the container with id 1db66da23ce616d898bb59682d8b3ec392b92ad504671cde0f762982785ff38b Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.854439 4835 generic.go:334] "Generic (PLEG): container finished" podID="48670262-e3bb-41f9-9c4e-cb1ee3608961" containerID="18f3e3d8ecbeba5e03c1133749af2644fe3f10f336ca124414a9a556bcd88109" exitCode=0 Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.854499 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" event={"ID":"48670262-e3bb-41f9-9c4e-cb1ee3608961","Type":"ContainerDied","Data":"18f3e3d8ecbeba5e03c1133749af2644fe3f10f336ca124414a9a556bcd88109"} Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.866439 4835 generic.go:334] "Generic (PLEG): container finished" podID="b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7" containerID="d703a5dac013d69f616856ad02ed2a18664bc6fe5bfd7c50b1cd6c9e7c56df43" exitCode=0 Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.866491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xl7p" event={"ID":"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7","Type":"ContainerDied","Data":"d703a5dac013d69f616856ad02ed2a18664bc6fe5bfd7c50b1cd6c9e7c56df43"} Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.876401 4835 generic.go:334] "Generic (PLEG): container finished" podID="488a0795-d2d4-4eb0-899a-305faff595d5" containerID="fa3678f5e48fe8de13d6b5690d8261b543d74d12fc1e30aec0d48911b918c832" exitCode=0 Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.876482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ca02-account-create-update-cwxqj" event={"ID":"488a0795-d2d4-4eb0-899a-305faff595d5","Type":"ContainerDied","Data":"fa3678f5e48fe8de13d6b5690d8261b543d74d12fc1e30aec0d48911b918c832"} Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.880867 4835 generic.go:334] "Generic (PLEG): container finished" podID="64b7782b-02cd-48f4-9955-a3e2a698e687" containerID="a45a9aeaa81dd6d522a12875cc64c9439197ed7ec5abae0ff1c540c0663eba3a" exitCode=0 Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.880964 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qgglt" event={"ID":"64b7782b-02cd-48f4-9955-a3e2a698e687","Type":"ContainerDied","Data":"a45a9aeaa81dd6d522a12875cc64c9439197ed7ec5abae0ff1c540c0663eba3a"} Feb 16 15:26:58 crc kubenswrapper[4835]: I0216 15:26:58.883146 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b239b59-1e9e-41e1-aa08-effb3b5cd78d","Type":"ContainerStarted","Data":"0fc1ba344653d2f3e3a305c593cd3abb32824557e753c35f66806e4b823a43af"} Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.403963 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c382add-fd25-4394-a43a-b4992607986b" path="/var/lib/kubelet/pods/8c382add-fd25-4394-a43a-b4992607986b/volumes" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.412095 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.493322 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8f4549-89bc-42f8-9c99-64f495486dc9-operator-scripts\") pod \"4b8f4549-89bc-42f8-9c99-64f495486dc9\" (UID: \"4b8f4549-89bc-42f8-9c99-64f495486dc9\") " Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.493704 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n7ph\" (UniqueName: \"kubernetes.io/projected/4b8f4549-89bc-42f8-9c99-64f495486dc9-kube-api-access-9n7ph\") pod \"4b8f4549-89bc-42f8-9c99-64f495486dc9\" (UID: \"4b8f4549-89bc-42f8-9c99-64f495486dc9\") " Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.496417 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8f4549-89bc-42f8-9c99-64f495486dc9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b8f4549-89bc-42f8-9c99-64f495486dc9" (UID: "4b8f4549-89bc-42f8-9c99-64f495486dc9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.507879 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8f4549-89bc-42f8-9c99-64f495486dc9-kube-api-access-9n7ph" (OuterVolumeSpecName: "kube-api-access-9n7ph") pod "4b8f4549-89bc-42f8-9c99-64f495486dc9" (UID: "4b8f4549-89bc-42f8-9c99-64f495486dc9"). InnerVolumeSpecName "kube-api-access-9n7ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.597164 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8f4549-89bc-42f8-9c99-64f495486dc9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.597207 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n7ph\" (UniqueName: \"kubernetes.io/projected/4b8f4549-89bc-42f8-9c99-64f495486dc9-kube-api-access-9n7ph\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.718419 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.812206 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c74289-acd3-4046-9eb5-e8668093107a-operator-scripts\") pod \"02c74289-acd3-4046-9eb5-e8668093107a\" (UID: \"02c74289-acd3-4046-9eb5-e8668093107a\") " Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.812402 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp9pb\" (UniqueName: \"kubernetes.io/projected/02c74289-acd3-4046-9eb5-e8668093107a-kube-api-access-wp9pb\") pod \"02c74289-acd3-4046-9eb5-e8668093107a\" (UID: \"02c74289-acd3-4046-9eb5-e8668093107a\") " Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.814306 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c74289-acd3-4046-9eb5-e8668093107a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02c74289-acd3-4046-9eb5-e8668093107a" (UID: "02c74289-acd3-4046-9eb5-e8668093107a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.823120 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c74289-acd3-4046-9eb5-e8668093107a-kube-api-access-wp9pb" (OuterVolumeSpecName: "kube-api-access-wp9pb") pod "02c74289-acd3-4046-9eb5-e8668093107a" (UID: "02c74289-acd3-4046-9eb5-e8668093107a"). InnerVolumeSpecName "kube-api-access-wp9pb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.824150 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.912984 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c","Type":"ContainerStarted","Data":"5a3d65a90b5a7f0b75e6b3310f22812d41e53e2c3565f4815c4ec73147d307d3"} Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.914315 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c74289-acd3-4046-9eb5-e8668093107a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.914337 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp9pb\" (UniqueName: \"kubernetes.io/projected/02c74289-acd3-4046-9eb5-e8668093107a-kube-api-access-wp9pb\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.917335 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b05d3637-75bb-4e72-85d0-d130d949a503","Type":"ContainerStarted","Data":"c7cc674afb055fa3ec2af529560520b8d89789b3f7d7b451afbde5fd7e972971"} Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.921698 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" event={"ID":"02c74289-acd3-4046-9eb5-e8668093107a","Type":"ContainerDied","Data":"07c838aa91a1c5f6dd51e9df5ff8cc6c7e479d59f07344061c85b38d4d7c497a"} Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.921737 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c838aa91a1c5f6dd51e9df5ff8cc6c7e479d59f07344061c85b38d4d7c497a" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 
15:26:59.921792 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7e91-account-create-update-rv2sg" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.928923 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l5r6k" event={"ID":"4b8f4549-89bc-42f8-9c99-64f495486dc9","Type":"ContainerDied","Data":"4e8d5c8c064137f90d476c8704e5c9ef839b73b91efffd440a5cafa5730d1cfb"} Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.928959 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e8d5c8c064137f90d476c8704e5c9ef839b73b91efffd440a5cafa5730d1cfb" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.928934 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l5r6k" Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.931265 4835 generic.go:334] "Generic (PLEG): container finished" podID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerID="f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4" exitCode=0 Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.931318 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d7bb8ff7b-4thhg" event={"ID":"a64f5816-6cc6-4640-b28b-0f0cfb175d28","Type":"ContainerDied","Data":"f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4"} Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.934463 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b239b59-1e9e-41e1-aa08-effb3b5cd78d","Type":"ContainerStarted","Data":"7cc185b4c96268a8f8eb0e1b70425e985745f59197b766c8b8005084be20f57f"} Feb 16 15:26:59 crc kubenswrapper[4835]: I0216 15:26:59.946979 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerStarted","Data":"1db66da23ce616d898bb59682d8b3ec392b92ad504671cde0f762982785ff38b"} Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.558745 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.660371 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48670262-e3bb-41f9-9c4e-cb1ee3608961-operator-scripts\") pod \"48670262-e3bb-41f9-9c4e-cb1ee3608961\" (UID: \"48670262-e3bb-41f9-9c4e-cb1ee3608961\") " Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.660444 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m24hd\" (UniqueName: \"kubernetes.io/projected/48670262-e3bb-41f9-9c4e-cb1ee3608961-kube-api-access-m24hd\") pod \"48670262-e3bb-41f9-9c4e-cb1ee3608961\" (UID: \"48670262-e3bb-41f9-9c4e-cb1ee3608961\") " Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.662042 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48670262-e3bb-41f9-9c4e-cb1ee3608961-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48670262-e3bb-41f9-9c4e-cb1ee3608961" (UID: "48670262-e3bb-41f9-9c4e-cb1ee3608961"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.674512 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48670262-e3bb-41f9-9c4e-cb1ee3608961-kube-api-access-m24hd" (OuterVolumeSpecName: "kube-api-access-m24hd") pod "48670262-e3bb-41f9-9c4e-cb1ee3608961" (UID: "48670262-e3bb-41f9-9c4e-cb1ee3608961"). InnerVolumeSpecName "kube-api-access-m24hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.763915 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48670262-e3bb-41f9-9c4e-cb1ee3608961-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.764245 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m24hd\" (UniqueName: \"kubernetes.io/projected/48670262-e3bb-41f9-9c4e-cb1ee3608961-kube-api-access-m24hd\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.870196 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.883397 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.939155 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.967575 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488a0795-d2d4-4eb0-899a-305faff595d5-operator-scripts\") pod \"488a0795-d2d4-4eb0-899a-305faff595d5\" (UID: \"488a0795-d2d4-4eb0-899a-305faff595d5\") " Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.967803 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f4zp\" (UniqueName: \"kubernetes.io/projected/488a0795-d2d4-4eb0-899a-305faff595d5-kube-api-access-7f4zp\") pod \"488a0795-d2d4-4eb0-899a-305faff595d5\" (UID: \"488a0795-d2d4-4eb0-899a-305faff595d5\") " Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.970221 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/488a0795-d2d4-4eb0-899a-305faff595d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "488a0795-d2d4-4eb0-899a-305faff595d5" (UID: "488a0795-d2d4-4eb0-899a-305faff595d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.983612 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/488a0795-d2d4-4eb0-899a-305faff595d5-kube-api-access-7f4zp" (OuterVolumeSpecName: "kube-api-access-7f4zp") pod "488a0795-d2d4-4eb0-899a-305faff595d5" (UID: "488a0795-d2d4-4eb0-899a-305faff595d5"). InnerVolumeSpecName "kube-api-access-7f4zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.985574 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xl7p" event={"ID":"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7","Type":"ContainerDied","Data":"b091906c9dcbbf4c4745cecb629a458733336750037a398380a1fa21c12413fd"} Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.985605 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b091906c9dcbbf4c4745cecb629a458733336750037a398380a1fa21c12413fd" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.985669 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xl7p" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.988835 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ca02-account-create-update-cwxqj" event={"ID":"488a0795-d2d4-4eb0-899a-305faff595d5","Type":"ContainerDied","Data":"5eb9674e58f4059eb571b20cdbc13e586e7b5ade00999b111e17c6266f8c3d74"} Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.988858 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb9674e58f4059eb571b20cdbc13e586e7b5ade00999b111e17c6266f8c3d74" Feb 16 15:27:00 crc kubenswrapper[4835]: I0216 15:27:00.988897 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ca02-account-create-update-cwxqj" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:00.999941 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qgglt" event={"ID":"64b7782b-02cd-48f4-9955-a3e2a698e687","Type":"ContainerDied","Data":"d10f69646c7b6bcdb461ef7941644bd586aa59f0d83f6a259ebec75a427d440b"} Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:00.999978 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d10f69646c7b6bcdb461ef7941644bd586aa59f0d83f6a259ebec75a427d440b" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.000038 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qgglt" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.004230 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerStarted","Data":"d87e31b9944a541dba1c0749e75a30f7fc0d6fdfd0603464e8460dfc3da559ab"} Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.009630 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b05d3637-75bb-4e72-85d0-d130d949a503","Type":"ContainerStarted","Data":"f47fac5058dc020fd13acc4f414843bcb3975d6c1dec5371bfdb88a065e1feb5"} Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.011558 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" event={"ID":"48670262-e3bb-41f9-9c4e-cb1ee3608961","Type":"ContainerDied","Data":"28206fa967f04cfac6540618b135301368f382eeb6e42a8f0f20da06ca419fb6"} Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.011581 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28206fa967f04cfac6540618b135301368f382eeb6e42a8f0f20da06ca419fb6" Feb 16 15:27:01 crc kubenswrapper[4835]: 
I0216 15:27:01.011628 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cbd6-account-create-update-wl4lp" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.037761 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.037741695 podStartE2EDuration="5.037741695s" podCreationTimestamp="2026-02-16 15:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:01.029304576 +0000 UTC m=+1170.321297491" watchObservedRunningTime="2026-02-16 15:27:01.037741695 +0000 UTC m=+1170.329734590" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.069711 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qk55\" (UniqueName: \"kubernetes.io/projected/64b7782b-02cd-48f4-9955-a3e2a698e687-kube-api-access-2qk55\") pod \"64b7782b-02cd-48f4-9955-a3e2a698e687\" (UID: \"64b7782b-02cd-48f4-9955-a3e2a698e687\") " Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.069773 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b7782b-02cd-48f4-9955-a3e2a698e687-operator-scripts\") pod \"64b7782b-02cd-48f4-9955-a3e2a698e687\" (UID: \"64b7782b-02cd-48f4-9955-a3e2a698e687\") " Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.069823 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-operator-scripts\") pod \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\" (UID: \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\") " Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.069938 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c5bb\" 
(UniqueName: \"kubernetes.io/projected/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-kube-api-access-8c5bb\") pod \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\" (UID: \"b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7\") " Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.070493 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f4zp\" (UniqueName: \"kubernetes.io/projected/488a0795-d2d4-4eb0-899a-305faff595d5-kube-api-access-7f4zp\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.070513 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/488a0795-d2d4-4eb0-899a-305faff595d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.072163 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7" (UID: "b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.072872 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64b7782b-02cd-48f4-9955-a3e2a698e687-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64b7782b-02cd-48f4-9955-a3e2a698e687" (UID: "64b7782b-02cd-48f4-9955-a3e2a698e687"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.074459 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b7782b-02cd-48f4-9955-a3e2a698e687-kube-api-access-2qk55" (OuterVolumeSpecName: "kube-api-access-2qk55") pod "64b7782b-02cd-48f4-9955-a3e2a698e687" (UID: "64b7782b-02cd-48f4-9955-a3e2a698e687"). InnerVolumeSpecName "kube-api-access-2qk55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.076798 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-kube-api-access-8c5bb" (OuterVolumeSpecName: "kube-api-access-8c5bb") pod "b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7" (UID: "b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7"). InnerVolumeSpecName "kube-api-access-8c5bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.173279 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qk55\" (UniqueName: \"kubernetes.io/projected/64b7782b-02cd-48f4-9955-a3e2a698e687-kube-api-access-2qk55\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.173312 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64b7782b-02cd-48f4-9955-a3e2a698e687-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.173325 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.173335 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c5bb\" (UniqueName: 
\"kubernetes.io/projected/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7-kube-api-access-8c5bb\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:01 crc kubenswrapper[4835]: I0216 15:27:01.468330 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.024509 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c","Type":"ContainerStarted","Data":"e0341ff3a57b2cefb5302f8e231a851ce45b03162aee0f9b032e5f48d48ad0e0"} Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.025094 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c","Type":"ContainerStarted","Data":"17ab0efc5f4d44c1b7952ad16c2980fd921249af14958a664cb65ee45554217a"} Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.029159 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4b239b59-1e9e-41e1-aa08-effb3b5cd78d","Type":"ContainerStarted","Data":"3084da82b45cf0b9d4ca3ab2fd43703de1452ae21999897797b5299e110dd604"} Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.029574 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.031663 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerStarted","Data":"4d990bb86b9c6b5e53336c39d6cef69e672e64d7f3a0283ee983ca2e1f3314de"} Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.031693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerStarted","Data":"8f86eef6f9369fa2567c0db65ed85f84c245db39136c258fba67a9f6e8689729"} Feb 16 15:27:02 crc kubenswrapper[4835]: 
I0216 15:27:02.053827 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.053810838 podStartE2EDuration="4.053810838s" podCreationTimestamp="2026-02-16 15:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:02.049072054 +0000 UTC m=+1171.341064959" watchObservedRunningTime="2026-02-16 15:27:02.053810838 +0000 UTC m=+1171.345803733" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.070163 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.070142403 podStartE2EDuration="6.070142403s" podCreationTimestamp="2026-02-16 15:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:02.064857485 +0000 UTC m=+1171.356850380" watchObservedRunningTime="2026-02-16 15:27:02.070142403 +0000 UTC m=+1171.362135298" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918056 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbln5"] Feb 16 15:27:02 crc kubenswrapper[4835]: E0216 15:27:02.918470 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="488a0795-d2d4-4eb0-899a-305faff595d5" containerName="mariadb-account-create-update" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918485 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="488a0795-d2d4-4eb0-899a-305faff595d5" containerName="mariadb-account-create-update" Feb 16 15:27:02 crc kubenswrapper[4835]: E0216 15:27:02.918499 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918506 4835 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: E0216 15:27:02.918524 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b7782b-02cd-48f4-9955-a3e2a698e687" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918547 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b7782b-02cd-48f4-9955-a3e2a698e687" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: E0216 15:27:02.918558 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48670262-e3bb-41f9-9c4e-cb1ee3608961" containerName="mariadb-account-create-update" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918564 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48670262-e3bb-41f9-9c4e-cb1ee3608961" containerName="mariadb-account-create-update" Feb 16 15:27:02 crc kubenswrapper[4835]: E0216 15:27:02.918571 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c74289-acd3-4046-9eb5-e8668093107a" containerName="mariadb-account-create-update" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918578 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c74289-acd3-4046-9eb5-e8668093107a" containerName="mariadb-account-create-update" Feb 16 15:27:02 crc kubenswrapper[4835]: E0216 15:27:02.918588 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8f4549-89bc-42f8-9c99-64f495486dc9" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918596 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8f4549-89bc-42f8-9c99-64f495486dc9" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918800 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c74289-acd3-4046-9eb5-e8668093107a" containerName="mariadb-account-create-update" Feb 16 
15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918816 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918826 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="488a0795-d2d4-4eb0-899a-305faff595d5" containerName="mariadb-account-create-update" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918841 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="48670262-e3bb-41f9-9c4e-cb1ee3608961" containerName="mariadb-account-create-update" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918855 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8f4549-89bc-42f8-9c99-64f495486dc9" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.918868 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b7782b-02cd-48f4-9955-a3e2a698e687" containerName="mariadb-database-create" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.919551 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.925431 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.925454 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.925512 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-425fg" Feb 16 15:27:02 crc kubenswrapper[4835]: I0216 15:27:02.933836 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbln5"] Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.022873 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-scripts\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.023171 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-config-data\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.023236 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " 
pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.023260 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqff\" (UniqueName: \"kubernetes.io/projected/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-kube-api-access-tzqff\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.125110 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-scripts\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.125164 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-config-data\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.125280 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.125306 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqff\" (UniqueName: \"kubernetes.io/projected/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-kube-api-access-tzqff\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: 
\"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.129605 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-scripts\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.129682 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-config-data\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.133051 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.180047 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqff\" (UniqueName: \"kubernetes.io/projected/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-kube-api-access-tzqff\") pod \"nova-cell0-conductor-db-sync-fbln5\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.236855 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.835837 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbln5"] Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.891460 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.935001 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:27:03 crc kubenswrapper[4835]: I0216 15:27:03.948502 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-db8d74b8d-b8dp6" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.048623 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8484d6fc46-gc2kd"] Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.048854 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8484d6fc46-gc2kd" podUID="43efd331-5c78-4ef6-9967-efb83d49f605" containerName="placement-log" containerID="cri-o://68ddc26f2552d702bd4ded6072e6738fd0ac554f91ccb992af69316da25d0bca" gracePeriod=30 Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.048977 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8484d6fc46-gc2kd" podUID="43efd331-5c78-4ef6-9967-efb83d49f605" containerName="placement-api" containerID="cri-o://a2fb2f10e95c4449c8d9ace8caa8af2482330e3483e040189377e64ea2e7d217" gracePeriod=30 Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.057183 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-config\") pod \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\" (UID: 
\"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.057325 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-ovndb-tls-certs\") pod \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.057415 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-combined-ca-bundle\") pod \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.057489 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fbqp\" (UniqueName: \"kubernetes.io/projected/a64f5816-6cc6-4640-b28b-0f0cfb175d28-kube-api-access-4fbqp\") pod \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.057584 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-httpd-config\") pod \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\" (UID: \"a64f5816-6cc6-4640-b28b-0f0cfb175d28\") " Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.058500 4835 generic.go:334] "Generic (PLEG): container finished" podID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerID="b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604" exitCode=0 Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.058570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d7bb8ff7b-4thhg" 
event={"ID":"a64f5816-6cc6-4640-b28b-0f0cfb175d28","Type":"ContainerDied","Data":"b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604"} Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.058594 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d7bb8ff7b-4thhg" event={"ID":"a64f5816-6cc6-4640-b28b-0f0cfb175d28","Type":"ContainerDied","Data":"177d3ab6f0c315a20233ac7287f71cf35f341afb8ad59efd6cb1e54967da8924"} Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.058610 4835 scope.go:117] "RemoveContainer" containerID="f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.058733 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d7bb8ff7b-4thhg" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.064557 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a64f5816-6cc6-4640-b28b-0f0cfb175d28" (UID: "a64f5816-6cc6-4640-b28b-0f0cfb175d28"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.064916 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64f5816-6cc6-4640-b28b-0f0cfb175d28-kube-api-access-4fbqp" (OuterVolumeSpecName: "kube-api-access-4fbqp") pod "a64f5816-6cc6-4640-b28b-0f0cfb175d28" (UID: "a64f5816-6cc6-4640-b28b-0f0cfb175d28"). InnerVolumeSpecName "kube-api-access-4fbqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.072434 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fbln5" event={"ID":"df55a8b2-fe66-43ed-9afe-f3c3b6316a51","Type":"ContainerStarted","Data":"668093d7d408c610357f06aa51581434edc637d293b257cd43f2cc33afc3a050"} Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.087577 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="ceilometer-central-agent" containerID="cri-o://d87e31b9944a541dba1c0749e75a30f7fc0d6fdfd0603464e8460dfc3da559ab" gracePeriod=30 Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.087742 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerStarted","Data":"2604eeabc27d52e3a38bf09e602afaddc69f1875a94e02bdf471167330515d3f"} Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.088003 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.088302 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="ceilometer-notification-agent" containerID="cri-o://8f86eef6f9369fa2567c0db65ed85f84c245db39136c258fba67a9f6e8689729" gracePeriod=30 Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.088329 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="proxy-httpd" containerID="cri-o://2604eeabc27d52e3a38bf09e602afaddc69f1875a94e02bdf471167330515d3f" gracePeriod=30 Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.088302 4835 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="sg-core" containerID="cri-o://4d990bb86b9c6b5e53336c39d6cef69e672e64d7f3a0283ee983ca2e1f3314de" gracePeriod=30 Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.156686 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.051720247 podStartE2EDuration="8.156666432s" podCreationTimestamp="2026-02-16 15:26:56 +0000 UTC" firstStartedPulling="2026-02-16 15:26:58.871498381 +0000 UTC m=+1168.163491276" lastFinishedPulling="2026-02-16 15:27:02.976444566 +0000 UTC m=+1172.268437461" observedRunningTime="2026-02-16 15:27:04.133375775 +0000 UTC m=+1173.425368680" watchObservedRunningTime="2026-02-16 15:27:04.156666432 +0000 UTC m=+1173.448659317" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.160408 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fbqp\" (UniqueName: \"kubernetes.io/projected/a64f5816-6cc6-4640-b28b-0f0cfb175d28-kube-api-access-4fbqp\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.160442 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.183566 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-config" (OuterVolumeSpecName: "config") pod "a64f5816-6cc6-4640-b28b-0f0cfb175d28" (UID: "a64f5816-6cc6-4640-b28b-0f0cfb175d28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.232623 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a64f5816-6cc6-4640-b28b-0f0cfb175d28" (UID: "a64f5816-6cc6-4640-b28b-0f0cfb175d28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.262550 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.262579 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.287338 4835 scope.go:117] "RemoveContainer" containerID="b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.297351 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a64f5816-6cc6-4640-b28b-0f0cfb175d28" (UID: "a64f5816-6cc6-4640-b28b-0f0cfb175d28"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.320333 4835 scope.go:117] "RemoveContainer" containerID="f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4" Feb 16 15:27:04 crc kubenswrapper[4835]: E0216 15:27:04.320990 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4\": container with ID starting with f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4 not found: ID does not exist" containerID="f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.321023 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4"} err="failed to get container status \"f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4\": rpc error: code = NotFound desc = could not find container \"f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4\": container with ID starting with f23072549f6264f285495852eaff84326fab8c4319020a70228414ebff03a7a4 not found: ID does not exist" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.321044 4835 scope.go:117] "RemoveContainer" containerID="b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604" Feb 16 15:27:04 crc kubenswrapper[4835]: E0216 15:27:04.321333 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604\": container with ID starting with b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604 not found: ID does not exist" containerID="b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.321348 
4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604"} err="failed to get container status \"b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604\": rpc error: code = NotFound desc = could not find container \"b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604\": container with ID starting with b32cf668cf1ac36ec38f3ff7296b33f8112519a8b366fb61e52ee4f93183c604 not found: ID does not exist" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.364766 4835 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a64f5816-6cc6-4640-b28b-0f0cfb175d28-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.397412 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d7bb8ff7b-4thhg"] Feb 16 15:27:04 crc kubenswrapper[4835]: I0216 15:27:04.408436 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5d7bb8ff7b-4thhg"] Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 15:27:05.099675 4835 generic.go:334] "Generic (PLEG): container finished" podID="43efd331-5c78-4ef6-9967-efb83d49f605" containerID="68ddc26f2552d702bd4ded6072e6738fd0ac554f91ccb992af69316da25d0bca" exitCode=143 Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 15:27:05.099755 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8484d6fc46-gc2kd" event={"ID":"43efd331-5c78-4ef6-9967-efb83d49f605","Type":"ContainerDied","Data":"68ddc26f2552d702bd4ded6072e6738fd0ac554f91ccb992af69316da25d0bca"} Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 15:27:05.103903 4835 generic.go:334] "Generic (PLEG): container finished" podID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerID="2604eeabc27d52e3a38bf09e602afaddc69f1875a94e02bdf471167330515d3f" exitCode=0 Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 
15:27:05.103938 4835 generic.go:334] "Generic (PLEG): container finished" podID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerID="4d990bb86b9c6b5e53336c39d6cef69e672e64d7f3a0283ee983ca2e1f3314de" exitCode=2 Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 15:27:05.103947 4835 generic.go:334] "Generic (PLEG): container finished" podID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerID="8f86eef6f9369fa2567c0db65ed85f84c245db39136c258fba67a9f6e8689729" exitCode=0 Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 15:27:05.103984 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerDied","Data":"2604eeabc27d52e3a38bf09e602afaddc69f1875a94e02bdf471167330515d3f"} Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 15:27:05.104017 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerDied","Data":"4d990bb86b9c6b5e53336c39d6cef69e672e64d7f3a0283ee983ca2e1f3314de"} Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 15:27:05.104032 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerDied","Data":"8f86eef6f9369fa2567c0db65ed85f84c245db39136c258fba67a9f6e8689729"} Feb 16 15:27:05 crc kubenswrapper[4835]: I0216 15:27:05.391684 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" path="/var/lib/kubelet/pods/a64f5816-6cc6-4640-b28b-0f0cfb175d28/volumes" Feb 16 15:27:07 crc kubenswrapper[4835]: I0216 15:27:07.818051 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:27:07 crc kubenswrapper[4835]: I0216 15:27:07.818305 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:27:07 crc 
kubenswrapper[4835]: I0216 15:27:07.860223 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:27:07 crc kubenswrapper[4835]: I0216 15:27:07.865751 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:27:08 crc kubenswrapper[4835]: I0216 15:27:08.137293 4835 generic.go:334] "Generic (PLEG): container finished" podID="43efd331-5c78-4ef6-9967-efb83d49f605" containerID="a2fb2f10e95c4449c8d9ace8caa8af2482330e3483e040189377e64ea2e7d217" exitCode=0 Feb 16 15:27:08 crc kubenswrapper[4835]: I0216 15:27:08.137370 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8484d6fc46-gc2kd" event={"ID":"43efd331-5c78-4ef6-9967-efb83d49f605","Type":"ContainerDied","Data":"a2fb2f10e95c4449c8d9ace8caa8af2482330e3483e040189377e64ea2e7d217"} Feb 16 15:27:08 crc kubenswrapper[4835]: I0216 15:27:08.137960 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:27:08 crc kubenswrapper[4835]: I0216 15:27:08.137983 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:27:08 crc kubenswrapper[4835]: I0216 15:27:08.774788 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:27:08 crc kubenswrapper[4835]: I0216 15:27:08.775074 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:27:08 crc kubenswrapper[4835]: I0216 15:27:08.818820 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:27:08 crc kubenswrapper[4835]: I0216 15:27:08.827012 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Feb 16 15:27:09 crc kubenswrapper[4835]: I0216 15:27:09.157085 4835 generic.go:334] "Generic (PLEG): container finished" podID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerID="d87e31b9944a541dba1c0749e75a30f7fc0d6fdfd0603464e8460dfc3da559ab" exitCode=0 Feb 16 15:27:09 crc kubenswrapper[4835]: I0216 15:27:09.158966 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerDied","Data":"d87e31b9944a541dba1c0749e75a30f7fc0d6fdfd0603464e8460dfc3da559ab"} Feb 16 15:27:09 crc kubenswrapper[4835]: I0216 15:27:09.159014 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:27:09 crc kubenswrapper[4835]: I0216 15:27:09.159230 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:27:10 crc kubenswrapper[4835]: I0216 15:27:10.162755 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:27:10 crc kubenswrapper[4835]: I0216 15:27:10.164097 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:27:10 crc kubenswrapper[4835]: I0216 15:27:10.436807 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 15:27:11 crc kubenswrapper[4835]: I0216 15:27:11.037171 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:27:11 crc kubenswrapper[4835]: I0216 15:27:11.174580 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:27:11 crc kubenswrapper[4835]: I0216 15:27:11.368209 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 
16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.778805 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.816657 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.904870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-combined-ca-bundle\") pod \"b04726f2-53d7-41d5-8c0b-03e787f7db78\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.904921 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-sg-core-conf-yaml\") pod \"b04726f2-53d7-41d5-8c0b-03e787f7db78\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.904950 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-config-data\") pod \"b04726f2-53d7-41d5-8c0b-03e787f7db78\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.905085 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-log-httpd\") pod \"b04726f2-53d7-41d5-8c0b-03e787f7db78\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.905121 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-scripts\") pod \"b04726f2-53d7-41d5-8c0b-03e787f7db78\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.905159 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-run-httpd\") pod \"b04726f2-53d7-41d5-8c0b-03e787f7db78\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.905180 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9snt\" (UniqueName: \"kubernetes.io/projected/b04726f2-53d7-41d5-8c0b-03e787f7db78-kube-api-access-j9snt\") pod \"b04726f2-53d7-41d5-8c0b-03e787f7db78\" (UID: \"b04726f2-53d7-41d5-8c0b-03e787f7db78\") " Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.908407 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b04726f2-53d7-41d5-8c0b-03e787f7db78" (UID: "b04726f2-53d7-41d5-8c0b-03e787f7db78"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.908638 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b04726f2-53d7-41d5-8c0b-03e787f7db78" (UID: "b04726f2-53d7-41d5-8c0b-03e787f7db78"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.911487 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04726f2-53d7-41d5-8c0b-03e787f7db78-kube-api-access-j9snt" (OuterVolumeSpecName: "kube-api-access-j9snt") pod "b04726f2-53d7-41d5-8c0b-03e787f7db78" (UID: "b04726f2-53d7-41d5-8c0b-03e787f7db78"). InnerVolumeSpecName "kube-api-access-j9snt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.913136 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-scripts" (OuterVolumeSpecName: "scripts") pod "b04726f2-53d7-41d5-8c0b-03e787f7db78" (UID: "b04726f2-53d7-41d5-8c0b-03e787f7db78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.936219 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b04726f2-53d7-41d5-8c0b-03e787f7db78" (UID: "b04726f2-53d7-41d5-8c0b-03e787f7db78"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:13 crc kubenswrapper[4835]: I0216 15:27:13.992584 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b04726f2-53d7-41d5-8c0b-03e787f7db78" (UID: "b04726f2-53d7-41d5-8c0b-03e787f7db78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.005232 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-config-data" (OuterVolumeSpecName: "config-data") pod "b04726f2-53d7-41d5-8c0b-03e787f7db78" (UID: "b04726f2-53d7-41d5-8c0b-03e787f7db78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.006656 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-config-data\") pod \"43efd331-5c78-4ef6-9967-efb83d49f605\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.006737 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnwsx\" (UniqueName: \"kubernetes.io/projected/43efd331-5c78-4ef6-9967-efb83d49f605-kube-api-access-fnwsx\") pod \"43efd331-5c78-4ef6-9967-efb83d49f605\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.006785 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-public-tls-certs\") pod \"43efd331-5c78-4ef6-9967-efb83d49f605\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.006806 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-scripts\") pod \"43efd331-5c78-4ef6-9967-efb83d49f605\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.006870 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-internal-tls-certs\") pod \"43efd331-5c78-4ef6-9967-efb83d49f605\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.006894 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43efd331-5c78-4ef6-9967-efb83d49f605-logs\") pod \"43efd331-5c78-4ef6-9967-efb83d49f605\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.006937 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-combined-ca-bundle\") pod \"43efd331-5c78-4ef6-9967-efb83d49f605\" (UID: \"43efd331-5c78-4ef6-9967-efb83d49f605\") " Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.007325 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.007342 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.007351 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.007360 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-log-httpd\") on node 
\"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.007370 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b04726f2-53d7-41d5-8c0b-03e787f7db78-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.007378 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b04726f2-53d7-41d5-8c0b-03e787f7db78-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.007387 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9snt\" (UniqueName: \"kubernetes.io/projected/b04726f2-53d7-41d5-8c0b-03e787f7db78-kube-api-access-j9snt\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.009758 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43efd331-5c78-4ef6-9967-efb83d49f605-logs" (OuterVolumeSpecName: "logs") pod "43efd331-5c78-4ef6-9967-efb83d49f605" (UID: "43efd331-5c78-4ef6-9967-efb83d49f605"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.014000 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43efd331-5c78-4ef6-9967-efb83d49f605-kube-api-access-fnwsx" (OuterVolumeSpecName: "kube-api-access-fnwsx") pod "43efd331-5c78-4ef6-9967-efb83d49f605" (UID: "43efd331-5c78-4ef6-9967-efb83d49f605"). InnerVolumeSpecName "kube-api-access-fnwsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.014742 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-scripts" (OuterVolumeSpecName: "scripts") pod "43efd331-5c78-4ef6-9967-efb83d49f605" (UID: "43efd331-5c78-4ef6-9967-efb83d49f605"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.055117 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43efd331-5c78-4ef6-9967-efb83d49f605" (UID: "43efd331-5c78-4ef6-9967-efb83d49f605"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.062158 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-config-data" (OuterVolumeSpecName: "config-data") pod "43efd331-5c78-4ef6-9967-efb83d49f605" (UID: "43efd331-5c78-4ef6-9967-efb83d49f605"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.101388 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "43efd331-5c78-4ef6-9967-efb83d49f605" (UID: "43efd331-5c78-4ef6-9967-efb83d49f605"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.109764 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.109798 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.109809 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnwsx\" (UniqueName: \"kubernetes.io/projected/43efd331-5c78-4ef6-9967-efb83d49f605-kube-api-access-fnwsx\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.109819 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.109830 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.109840 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43efd331-5c78-4ef6-9967-efb83d49f605-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.118145 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "43efd331-5c78-4ef6-9967-efb83d49f605" (UID: "43efd331-5c78-4ef6-9967-efb83d49f605"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.211311 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fbln5" event={"ID":"df55a8b2-fe66-43ed-9afe-f3c3b6316a51","Type":"ContainerStarted","Data":"669cc378df79b2185e560475a86717f1b1d1d1c3ab542511d5d496c2984eee49"} Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.211842 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43efd331-5c78-4ef6-9967-efb83d49f605-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.214739 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b04726f2-53d7-41d5-8c0b-03e787f7db78","Type":"ContainerDied","Data":"1db66da23ce616d898bb59682d8b3ec392b92ad504671cde0f762982785ff38b"} Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.214781 4835 scope.go:117] "RemoveContainer" containerID="2604eeabc27d52e3a38bf09e602afaddc69f1875a94e02bdf471167330515d3f" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.214890 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.217678 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8484d6fc46-gc2kd" event={"ID":"43efd331-5c78-4ef6-9967-efb83d49f605","Type":"ContainerDied","Data":"f071ef05c3ec80e8bfda16314acf511b03c422a2da1419b004ce94e3cb1415fc"} Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.217771 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8484d6fc46-gc2kd" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.254793 4835 scope.go:117] "RemoveContainer" containerID="4d990bb86b9c6b5e53336c39d6cef69e672e64d7f3a0283ee983ca2e1f3314de" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.260027 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fbln5" podStartSLOduration=2.58370137 podStartE2EDuration="12.260003677s" podCreationTimestamp="2026-02-16 15:27:02 +0000 UTC" firstStartedPulling="2026-02-16 15:27:03.849137766 +0000 UTC m=+1173.141130661" lastFinishedPulling="2026-02-16 15:27:13.525440073 +0000 UTC m=+1182.817432968" observedRunningTime="2026-02-16 15:27:14.234734549 +0000 UTC m=+1183.526727454" watchObservedRunningTime="2026-02-16 15:27:14.260003677 +0000 UTC m=+1183.551996582" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.265883 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.282319 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.293567 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8484d6fc46-gc2kd"] Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.301519 4835 scope.go:117] "RemoveContainer" containerID="8f86eef6f9369fa2567c0db65ed85f84c245db39136c258fba67a9f6e8689729" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.308868 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8484d6fc46-gc2kd"] Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.318514 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.318986 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" 
containerName="ceilometer-central-agent" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319008 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="ceilometer-central-agent" Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.319016 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="ceilometer-notification-agent" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319024 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="ceilometer-notification-agent" Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.319038 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerName="neutron-httpd" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319046 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerName="neutron-httpd" Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.319085 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="proxy-httpd" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319093 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="proxy-httpd" Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.319109 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="sg-core" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319117 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="sg-core" Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.319128 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="43efd331-5c78-4ef6-9967-efb83d49f605" containerName="placement-api" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319137 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="43efd331-5c78-4ef6-9967-efb83d49f605" containerName="placement-api" Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.319157 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerName="neutron-api" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319165 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerName="neutron-api" Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.319177 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43efd331-5c78-4ef6-9967-efb83d49f605" containerName="placement-log" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319184 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="43efd331-5c78-4ef6-9967-efb83d49f605" containerName="placement-log" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319449 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="proxy-httpd" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319464 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="sg-core" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319482 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="ceilometer-notification-agent" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319493 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerName="neutron-api" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319509 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="43efd331-5c78-4ef6-9967-efb83d49f605" containerName="placement-log" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319617 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" containerName="ceilometer-central-agent" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319635 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64f5816-6cc6-4640-b28b-0f0cfb175d28" containerName="neutron-httpd" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.319648 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="43efd331-5c78-4ef6-9967-efb83d49f605" containerName="placement-api" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.328159 4835 scope.go:117] "RemoveContainer" containerID="d87e31b9944a541dba1c0749e75a30f7fc0d6fdfd0603464e8460dfc3da559ab" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.335361 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.335502 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.342062 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.342238 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.370452 4835 scope.go:117] "RemoveContainer" containerID="a2fb2f10e95c4449c8d9ace8caa8af2482330e3483e040189377e64ea2e7d217" Feb 16 15:27:14 crc kubenswrapper[4835]: E0216 15:27:14.382352 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.418615 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.418718 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-config-data\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.418805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.418824 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-run-httpd\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.418849 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56ch\" (UniqueName: \"kubernetes.io/projected/a28dd44d-5358-4b87-9bdc-c48996c193a8-kube-api-access-r56ch\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.418870 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-log-httpd\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.418897 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-scripts\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.480034 4835 scope.go:117] "RemoveContainer" containerID="68ddc26f2552d702bd4ded6072e6738fd0ac554f91ccb992af69316da25d0bca" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.521253 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.521307 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-run-httpd\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.521332 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r56ch\" (UniqueName: \"kubernetes.io/projected/a28dd44d-5358-4b87-9bdc-c48996c193a8-kube-api-access-r56ch\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.521357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-log-httpd\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.521393 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-scripts\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.521488 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 
15:27:14.521577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-config-data\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.526427 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-run-httpd\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.526709 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-config-data\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.526895 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-log-httpd\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.528800 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.532975 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " 
pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.534014 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-scripts\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.539450 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r56ch\" (UniqueName: \"kubernetes.io/projected/a28dd44d-5358-4b87-9bdc-c48996c193a8-kube-api-access-r56ch\") pod \"ceilometer-0\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " pod="openstack/ceilometer-0" Feb 16 15:27:14 crc kubenswrapper[4835]: I0216 15:27:14.659610 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:27:15 crc kubenswrapper[4835]: W0216 15:27:15.217786 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda28dd44d_5358_4b87_9bdc_c48996c193a8.slice/crio-e545e532c8056c6b9da2b5ccfb752bf0f9a6b52f5384c722cc3ddff3d083b911 WatchSource:0}: Error finding container e545e532c8056c6b9da2b5ccfb752bf0f9a6b52f5384c722cc3ddff3d083b911: Status 404 returned error can't find the container with id e545e532c8056c6b9da2b5ccfb752bf0f9a6b52f5384c722cc3ddff3d083b911 Feb 16 15:27:15 crc kubenswrapper[4835]: I0216 15:27:15.221491 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:27:15 crc kubenswrapper[4835]: I0216 15:27:15.222910 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:15 crc kubenswrapper[4835]: I0216 15:27:15.390997 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43efd331-5c78-4ef6-9967-efb83d49f605" path="/var/lib/kubelet/pods/43efd331-5c78-4ef6-9967-efb83d49f605/volumes" Feb 
16 15:27:15 crc kubenswrapper[4835]: I0216 15:27:15.391639 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b04726f2-53d7-41d5-8c0b-03e787f7db78" path="/var/lib/kubelet/pods/b04726f2-53d7-41d5-8c0b-03e787f7db78/volumes" Feb 16 15:27:16 crc kubenswrapper[4835]: I0216 15:27:16.248087 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerStarted","Data":"f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6"} Feb 16 15:27:16 crc kubenswrapper[4835]: I0216 15:27:16.248569 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerStarted","Data":"e545e532c8056c6b9da2b5ccfb752bf0f9a6b52f5384c722cc3ddff3d083b911"} Feb 16 15:27:17 crc kubenswrapper[4835]: I0216 15:27:17.261322 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerStarted","Data":"281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48"} Feb 16 15:27:17 crc kubenswrapper[4835]: I0216 15:27:17.261715 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerStarted","Data":"17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c"} Feb 16 15:27:19 crc kubenswrapper[4835]: I0216 15:27:19.288186 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerStarted","Data":"12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f"} Feb 16 15:27:19 crc kubenswrapper[4835]: I0216 15:27:19.289581 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:27:19 crc kubenswrapper[4835]: I0216 15:27:19.311758 4835 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:19 crc kubenswrapper[4835]: I0216 15:27:19.330249 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.304314338 podStartE2EDuration="5.330232033s" podCreationTimestamp="2026-02-16 15:27:14 +0000 UTC" firstStartedPulling="2026-02-16 15:27:15.221289823 +0000 UTC m=+1184.513282718" lastFinishedPulling="2026-02-16 15:27:18.247207518 +0000 UTC m=+1187.539200413" observedRunningTime="2026-02-16 15:27:19.322295556 +0000 UTC m=+1188.614288451" watchObservedRunningTime="2026-02-16 15:27:19.330232033 +0000 UTC m=+1188.622224928" Feb 16 15:27:21 crc kubenswrapper[4835]: I0216 15:27:21.306729 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="ceilometer-central-agent" containerID="cri-o://f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6" gracePeriod=30 Feb 16 15:27:21 crc kubenswrapper[4835]: I0216 15:27:21.306790 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="sg-core" containerID="cri-o://281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48" gracePeriod=30 Feb 16 15:27:21 crc kubenswrapper[4835]: I0216 15:27:21.306825 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="ceilometer-notification-agent" containerID="cri-o://17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c" gracePeriod=30 Feb 16 15:27:21 crc kubenswrapper[4835]: I0216 15:27:21.306851 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="proxy-httpd" 
containerID="cri-o://12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f" gracePeriod=30 Feb 16 15:27:22 crc kubenswrapper[4835]: I0216 15:27:22.318439 4835 generic.go:334] "Generic (PLEG): container finished" podID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerID="12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f" exitCode=0 Feb 16 15:27:22 crc kubenswrapper[4835]: I0216 15:27:22.318984 4835 generic.go:334] "Generic (PLEG): container finished" podID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerID="281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48" exitCode=2 Feb 16 15:27:22 crc kubenswrapper[4835]: I0216 15:27:22.318995 4835 generic.go:334] "Generic (PLEG): container finished" podID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerID="17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c" exitCode=0 Feb 16 15:27:22 crc kubenswrapper[4835]: I0216 15:27:22.318489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerDied","Data":"12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f"} Feb 16 15:27:22 crc kubenswrapper[4835]: I0216 15:27:22.319030 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerDied","Data":"281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48"} Feb 16 15:27:22 crc kubenswrapper[4835]: I0216 15:27:22.319043 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerDied","Data":"17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c"} Feb 16 15:27:24 crc kubenswrapper[4835]: I0216 15:27:24.341215 4835 generic.go:334] "Generic (PLEG): container finished" podID="df55a8b2-fe66-43ed-9afe-f3c3b6316a51" 
containerID="669cc378df79b2185e560475a86717f1b1d1d1c3ab542511d5d496c2984eee49" exitCode=0 Feb 16 15:27:24 crc kubenswrapper[4835]: I0216 15:27:24.341405 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fbln5" event={"ID":"df55a8b2-fe66-43ed-9afe-f3c3b6316a51","Type":"ContainerDied","Data":"669cc378df79b2185e560475a86717f1b1d1d1c3ab542511d5d496c2984eee49"} Feb 16 15:27:25 crc kubenswrapper[4835]: I0216 15:27:25.826683 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:25 crc kubenswrapper[4835]: I0216 15:27:25.975810 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-scripts\") pod \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " Feb 16 15:27:25 crc kubenswrapper[4835]: I0216 15:27:25.976215 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-combined-ca-bundle\") pod \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " Feb 16 15:27:25 crc kubenswrapper[4835]: I0216 15:27:25.976287 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzqff\" (UniqueName: \"kubernetes.io/projected/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-kube-api-access-tzqff\") pod \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\" (UID: \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " Feb 16 15:27:25 crc kubenswrapper[4835]: I0216 15:27:25.976316 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-config-data\") pod \"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\" (UID: 
\"df55a8b2-fe66-43ed-9afe-f3c3b6316a51\") " Feb 16 15:27:25 crc kubenswrapper[4835]: I0216 15:27:25.981171 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-scripts" (OuterVolumeSpecName: "scripts") pod "df55a8b2-fe66-43ed-9afe-f3c3b6316a51" (UID: "df55a8b2-fe66-43ed-9afe-f3c3b6316a51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:25 crc kubenswrapper[4835]: I0216 15:27:25.981834 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-kube-api-access-tzqff" (OuterVolumeSpecName: "kube-api-access-tzqff") pod "df55a8b2-fe66-43ed-9afe-f3c3b6316a51" (UID: "df55a8b2-fe66-43ed-9afe-f3c3b6316a51"). InnerVolumeSpecName "kube-api-access-tzqff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.019743 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-config-data" (OuterVolumeSpecName: "config-data") pod "df55a8b2-fe66-43ed-9afe-f3c3b6316a51" (UID: "df55a8b2-fe66-43ed-9afe-f3c3b6316a51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.024650 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df55a8b2-fe66-43ed-9afe-f3c3b6316a51" (UID: "df55a8b2-fe66-43ed-9afe-f3c3b6316a51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.078740 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzqff\" (UniqueName: \"kubernetes.io/projected/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-kube-api-access-tzqff\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.078777 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.078787 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.078795 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df55a8b2-fe66-43ed-9afe-f3c3b6316a51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.361751 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fbln5" event={"ID":"df55a8b2-fe66-43ed-9afe-f3c3b6316a51","Type":"ContainerDied","Data":"668093d7d408c610357f06aa51581434edc637d293b257cd43f2cc33afc3a050"} Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.361796 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="668093d7d408c610357f06aa51581434edc637d293b257cd43f2cc33afc3a050" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.361811 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fbln5" Feb 16 15:27:26 crc kubenswrapper[4835]: E0216 15:27:26.380850 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.495079 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:27:26 crc kubenswrapper[4835]: E0216 15:27:26.495626 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df55a8b2-fe66-43ed-9afe-f3c3b6316a51" containerName="nova-cell0-conductor-db-sync" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.495648 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="df55a8b2-fe66-43ed-9afe-f3c3b6316a51" containerName="nova-cell0-conductor-db-sync" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.495889 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="df55a8b2-fe66-43ed-9afe-f3c3b6316a51" containerName="nova-cell0-conductor-db-sync" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.496744 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.499820 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.501778 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-425fg" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.514125 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.588476 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3655b0a-95e0-4eac-95c6-07197479c042-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.588762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3655b0a-95e0-4eac-95c6-07197479c042-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.588904 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn7p7\" (UniqueName: \"kubernetes.io/projected/f3655b0a-95e0-4eac-95c6-07197479c042-kube-api-access-mn7p7\") pod \"nova-cell0-conductor-0\" (UID: \"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.691149 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f3655b0a-95e0-4eac-95c6-07197479c042-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.691205 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn7p7\" (UniqueName: \"kubernetes.io/projected/f3655b0a-95e0-4eac-95c6-07197479c042-kube-api-access-mn7p7\") pod \"nova-cell0-conductor-0\" (UID: \"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.691325 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3655b0a-95e0-4eac-95c6-07197479c042-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.697400 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3655b0a-95e0-4eac-95c6-07197479c042-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.697727 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3655b0a-95e0-4eac-95c6-07197479c042-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.709558 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn7p7\" (UniqueName: \"kubernetes.io/projected/f3655b0a-95e0-4eac-95c6-07197479c042-kube-api-access-mn7p7\") pod \"nova-cell0-conductor-0\" (UID: 
\"f3655b0a-95e0-4eac-95c6-07197479c042\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:26 crc kubenswrapper[4835]: I0216 15:27:26.816433 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.095227 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.201510 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r56ch\" (UniqueName: \"kubernetes.io/projected/a28dd44d-5358-4b87-9bdc-c48996c193a8-kube-api-access-r56ch\") pod \"a28dd44d-5358-4b87-9bdc-c48996c193a8\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.201681 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-run-httpd\") pod \"a28dd44d-5358-4b87-9bdc-c48996c193a8\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.201720 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-log-httpd\") pod \"a28dd44d-5358-4b87-9bdc-c48996c193a8\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.201779 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-config-data\") pod \"a28dd44d-5358-4b87-9bdc-c48996c193a8\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.201837 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-combined-ca-bundle\") pod \"a28dd44d-5358-4b87-9bdc-c48996c193a8\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.201859 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-scripts\") pod \"a28dd44d-5358-4b87-9bdc-c48996c193a8\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.201882 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-sg-core-conf-yaml\") pod \"a28dd44d-5358-4b87-9bdc-c48996c193a8\" (UID: \"a28dd44d-5358-4b87-9bdc-c48996c193a8\") " Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.202679 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a28dd44d-5358-4b87-9bdc-c48996c193a8" (UID: "a28dd44d-5358-4b87-9bdc-c48996c193a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.202853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a28dd44d-5358-4b87-9bdc-c48996c193a8" (UID: "a28dd44d-5358-4b87-9bdc-c48996c193a8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.205606 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-scripts" (OuterVolumeSpecName: "scripts") pod "a28dd44d-5358-4b87-9bdc-c48996c193a8" (UID: "a28dd44d-5358-4b87-9bdc-c48996c193a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.206058 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28dd44d-5358-4b87-9bdc-c48996c193a8-kube-api-access-r56ch" (OuterVolumeSpecName: "kube-api-access-r56ch") pod "a28dd44d-5358-4b87-9bdc-c48996c193a8" (UID: "a28dd44d-5358-4b87-9bdc-c48996c193a8"). InnerVolumeSpecName "kube-api-access-r56ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.252694 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a28dd44d-5358-4b87-9bdc-c48996c193a8" (UID: "a28dd44d-5358-4b87-9bdc-c48996c193a8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.272569 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a28dd44d-5358-4b87-9bdc-c48996c193a8" (UID: "a28dd44d-5358-4b87-9bdc-c48996c193a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.304001 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.304029 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.304038 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.304047 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r56ch\" (UniqueName: \"kubernetes.io/projected/a28dd44d-5358-4b87-9bdc-c48996c193a8-kube-api-access-r56ch\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.304056 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.304064 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a28dd44d-5358-4b87-9bdc-c48996c193a8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.315584 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-config-data" (OuterVolumeSpecName: "config-data") pod "a28dd44d-5358-4b87-9bdc-c48996c193a8" (UID: "a28dd44d-5358-4b87-9bdc-c48996c193a8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:27 crc kubenswrapper[4835]: W0216 15:27:27.331515 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3655b0a_95e0_4eac_95c6_07197479c042.slice/crio-edaabc6410530dbb4eadbe15b209d76cc10db9be39e8e2acaa616a5dd54444e0 WatchSource:0}: Error finding container edaabc6410530dbb4eadbe15b209d76cc10db9be39e8e2acaa616a5dd54444e0: Status 404 returned error can't find the container with id edaabc6410530dbb4eadbe15b209d76cc10db9be39e8e2acaa616a5dd54444e0 Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.332433 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.371946 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f3655b0a-95e0-4eac-95c6-07197479c042","Type":"ContainerStarted","Data":"edaabc6410530dbb4eadbe15b209d76cc10db9be39e8e2acaa616a5dd54444e0"} Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.374324 4835 generic.go:334] "Generic (PLEG): container finished" podID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerID="f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6" exitCode=0 Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.374365 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerDied","Data":"f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6"} Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.374397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a28dd44d-5358-4b87-9bdc-c48996c193a8","Type":"ContainerDied","Data":"e545e532c8056c6b9da2b5ccfb752bf0f9a6b52f5384c722cc3ddff3d083b911"} Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 
15:27:27.374424 4835 scope.go:117] "RemoveContainer" containerID="12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.374423 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.405902 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28dd44d-5358-4b87-9bdc-c48996c193a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.410994 4835 scope.go:117] "RemoveContainer" containerID="281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.434040 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.445641 4835 scope.go:117] "RemoveContainer" containerID="17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.465166 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.492174 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.492440 4835 scope.go:117] "RemoveContainer" containerID="f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6" Feb 16 15:27:27 crc kubenswrapper[4835]: E0216 15:27:27.492728 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="sg-core" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.492748 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="sg-core" Feb 16 15:27:27 crc kubenswrapper[4835]: E0216 15:27:27.492764 4835 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="ceilometer-notification-agent" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.492770 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="ceilometer-notification-agent" Feb 16 15:27:27 crc kubenswrapper[4835]: E0216 15:27:27.492795 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="ceilometer-central-agent" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.492802 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="ceilometer-central-agent" Feb 16 15:27:27 crc kubenswrapper[4835]: E0216 15:27:27.492809 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="proxy-httpd" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.492814 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="proxy-httpd" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.493009 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="ceilometer-central-agent" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.493029 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="proxy-httpd" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.493079 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="ceilometer-notification-agent" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.493123 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" containerName="sg-core" Feb 16 15:27:27 
crc kubenswrapper[4835]: I0216 15:27:27.494930 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.497001 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.498086 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.503615 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.517271 4835 scope.go:117] "RemoveContainer" containerID="12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f" Feb 16 15:27:27 crc kubenswrapper[4835]: E0216 15:27:27.518914 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f\": container with ID starting with 12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f not found: ID does not exist" containerID="12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.518956 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f"} err="failed to get container status \"12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f\": rpc error: code = NotFound desc = could not find container \"12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f\": container with ID starting with 12f98f86cb3457c59c2d4d20e51c705c1f8fe90b3535484dba3a8fe128d13e8f not found: ID does not exist" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.518981 4835 scope.go:117] "RemoveContainer" 
containerID="281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48" Feb 16 15:27:27 crc kubenswrapper[4835]: E0216 15:27:27.519386 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48\": container with ID starting with 281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48 not found: ID does not exist" containerID="281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.519425 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48"} err="failed to get container status \"281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48\": rpc error: code = NotFound desc = could not find container \"281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48\": container with ID starting with 281e34dc59921edb6510a3c6186ac0e178f40073911099783e6b50bfc7a31d48 not found: ID does not exist" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.519452 4835 scope.go:117] "RemoveContainer" containerID="17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c" Feb 16 15:27:27 crc kubenswrapper[4835]: E0216 15:27:27.519755 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c\": container with ID starting with 17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c not found: ID does not exist" containerID="17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.519780 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c"} err="failed to get container status \"17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c\": rpc error: code = NotFound desc = could not find container \"17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c\": container with ID starting with 17f1b21ccd96bccf91b42df8df343fd6a0d3a26c01d10ef7ef9770bc0cc7cc0c not found: ID does not exist" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.519795 4835 scope.go:117] "RemoveContainer" containerID="f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6" Feb 16 15:27:27 crc kubenswrapper[4835]: E0216 15:27:27.520342 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6\": container with ID starting with f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6 not found: ID does not exist" containerID="f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.520382 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6"} err="failed to get container status \"f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6\": rpc error: code = NotFound desc = could not find container \"f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6\": container with ID starting with f8063e662a2aadba70edd19ba4698a75659d9aeed6444ae751ade911dcdd7fb6 not found: ID does not exist" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.609419 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-run-httpd\") pod \"ceilometer-0\" (UID: 
\"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.609461 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-log-httpd\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.609499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgw5\" (UniqueName: \"kubernetes.io/projected/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-kube-api-access-phgw5\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.609625 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-scripts\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.609640 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-config-data\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.609727 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.609883 
4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.711691 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-run-httpd\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.711736 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-log-httpd\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.711772 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phgw5\" (UniqueName: \"kubernetes.io/projected/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-kube-api-access-phgw5\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.711821 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-scripts\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.711838 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-config-data\") pod \"ceilometer-0\" (UID: 
\"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.711863 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.711907 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.712193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-log-httpd\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.712879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-run-httpd\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.715993 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.716573 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.716951 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-scripts\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.717645 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-config-data\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.730338 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phgw5\" (UniqueName: \"kubernetes.io/projected/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-kube-api-access-phgw5\") pod \"ceilometer-0\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " pod="openstack/ceilometer-0" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.818818 4835 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8c382add-fd25-4394-a43a-b4992607986b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8c382add-fd25-4394-a43a-b4992607986b] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8c382add_fd25_4394_a43a_b4992607986b.slice" Feb 16 15:27:27 crc kubenswrapper[4835]: I0216 15:27:27.819128 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:27:28 crc kubenswrapper[4835]: W0216 15:27:28.319745 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5858ccbc_3ac8_49ee_88b1_d0c59b89288b.slice/crio-5b72842245a5c3f41979fda19469e044bf5634c46b0899eeb9ade1ab898998dd WatchSource:0}: Error finding container 5b72842245a5c3f41979fda19469e044bf5634c46b0899eeb9ade1ab898998dd: Status 404 returned error can't find the container with id 5b72842245a5c3f41979fda19469e044bf5634c46b0899eeb9ade1ab898998dd Feb 16 15:27:28 crc kubenswrapper[4835]: I0216 15:27:28.320713 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:27:28 crc kubenswrapper[4835]: I0216 15:27:28.384335 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f3655b0a-95e0-4eac-95c6-07197479c042","Type":"ContainerStarted","Data":"481b6a0bc949336f051c4c6326aa54dc8061e4aeb5a4278d9d5af7c686ad5bff"} Feb 16 15:27:28 crc kubenswrapper[4835]: I0216 15:27:28.385295 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:28 crc kubenswrapper[4835]: I0216 15:27:28.387893 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerStarted","Data":"5b72842245a5c3f41979fda19469e044bf5634c46b0899eeb9ade1ab898998dd"} Feb 16 15:27:28 crc kubenswrapper[4835]: I0216 15:27:28.410174 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.410155525 podStartE2EDuration="2.410155525s" podCreationTimestamp="2026-02-16 15:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:28.403318937 +0000 UTC 
m=+1197.695311832" watchObservedRunningTime="2026-02-16 15:27:28.410155525 +0000 UTC m=+1197.702148410" Feb 16 15:27:29 crc kubenswrapper[4835]: I0216 15:27:29.390193 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28dd44d-5358-4b87-9bdc-c48996c193a8" path="/var/lib/kubelet/pods/a28dd44d-5358-4b87-9bdc-c48996c193a8/volumes" Feb 16 15:27:29 crc kubenswrapper[4835]: I0216 15:27:29.397118 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerStarted","Data":"bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d"} Feb 16 15:27:30 crc kubenswrapper[4835]: I0216 15:27:30.409783 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerStarted","Data":"38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf"} Feb 16 15:27:30 crc kubenswrapper[4835]: I0216 15:27:30.410298 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerStarted","Data":"785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc"} Feb 16 15:27:32 crc kubenswrapper[4835]: I0216 15:27:32.445796 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerStarted","Data":"b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083"} Feb 16 15:27:32 crc kubenswrapper[4835]: I0216 15:27:32.447454 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:27:32 crc kubenswrapper[4835]: I0216 15:27:32.485774 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.249400733 podStartE2EDuration="5.485716726s" podCreationTimestamp="2026-02-16 15:27:27 +0000 UTC" 
firstStartedPulling="2026-02-16 15:27:28.32164465 +0000 UTC m=+1197.613637545" lastFinishedPulling="2026-02-16 15:27:31.557960643 +0000 UTC m=+1200.849953538" observedRunningTime="2026-02-16 15:27:32.473909398 +0000 UTC m=+1201.765902313" watchObservedRunningTime="2026-02-16 15:27:32.485716726 +0000 UTC m=+1201.777709641" Feb 16 15:27:36 crc kubenswrapper[4835]: I0216 15:27:36.853434 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.317804 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xt45f"] Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.319188 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.323191 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.323197 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.328793 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xt45f"] Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.403306 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-scripts\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.403628 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.403740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2md\" (UniqueName: \"kubernetes.io/projected/d67a4473-b44e-41b6-b975-25f3f4f34ad8-kube-api-access-qv2md\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.403865 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-config-data\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.500149 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.502077 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.507647 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.526383 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.526494 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv2md\" (UniqueName: \"kubernetes.io/projected/d67a4473-b44e-41b6-b975-25f3f4f34ad8-kube-api-access-qv2md\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.526613 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-config-data\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.531094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-scripts\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.563111 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.572049 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-scripts\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.572362 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-config-data\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.581071 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.583559 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv2md\" (UniqueName: \"kubernetes.io/projected/d67a4473-b44e-41b6-b975-25f3f4f34ad8-kube-api-access-qv2md\") pod \"nova-cell0-cell-mapping-xt45f\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.636882 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d050dfb9-af7a-4642-be0e-892734cca6e0-logs\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.636942 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72v9h\" 
(UniqueName: \"kubernetes.io/projected/d050dfb9-af7a-4642-be0e-892734cca6e0-kube-api-access-72v9h\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.637042 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.637157 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-config-data\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.640734 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.739251 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.739349 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-config-data\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.739414 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d050dfb9-af7a-4642-be0e-892734cca6e0-logs\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.739431 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72v9h\" (UniqueName: \"kubernetes.io/projected/d050dfb9-af7a-4642-be0e-892734cca6e0-kube-api-access-72v9h\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.745455 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.747154 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.749590 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-config-data\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.749934 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d050dfb9-af7a-4642-be0e-892734cca6e0-logs\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.752152 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.775028 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.778574 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72v9h\" (UniqueName: \"kubernetes.io/projected/d050dfb9-af7a-4642-be0e-892734cca6e0-kube-api-access-72v9h\") pod \"nova-api-0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") " pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.813598 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.836383 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.838561 4835 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.841349 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7d99-8fac-412f-a502-ac6ecee3e809-logs\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.841384 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.841514 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-config-data\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.841548 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pnz8\" (UniqueName: \"kubernetes.io/projected/373e7d99-8fac-412f-a502-ac6ecee3e809-kube-api-access-4pnz8\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.857384 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.866736 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.940799 4835 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.946951 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn2rc\" (UniqueName: \"kubernetes.io/projected/b3694d0f-1549-4763-9eb8-b91775af1371-kube-api-access-cn2rc\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.947025 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7d99-8fac-412f-a502-ac6ecee3e809-logs\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.947047 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.947102 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.947161 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-config-data\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.947175 
4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pnz8\" (UniqueName: \"kubernetes.io/projected/373e7d99-8fac-412f-a502-ac6ecee3e809-kube-api-access-4pnz8\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.947190 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-config-data\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.947620 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7d99-8fac-412f-a502-ac6ecee3e809-logs\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.960763 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.961121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-config-data\") pod \"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:37 crc kubenswrapper[4835]: I0216 15:27:37.987096 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pnz8\" (UniqueName: \"kubernetes.io/projected/373e7d99-8fac-412f-a502-ac6ecee3e809-kube-api-access-4pnz8\") pod 
\"nova-metadata-0\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " pod="openstack/nova-metadata-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.050680 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.050772 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-config-data\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.050836 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn2rc\" (UniqueName: \"kubernetes.io/projected/b3694d0f-1549-4763-9eb8-b91775af1371-kube-api-access-cn2rc\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.056558 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.057133 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-config-data\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.065382 4835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.075145 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn2rc\" (UniqueName: \"kubernetes.io/projected/b3694d0f-1549-4763-9eb8-b91775af1371-kube-api-access-cn2rc\") pod \"nova-scheduler-0\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") " pod="openstack/nova-scheduler-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.093959 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.107322 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.129189 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.150002 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-slzml"] Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.151758 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.154402 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.154432 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.154457 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9kxt\" (UniqueName: \"kubernetes.io/projected/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-kube-api-access-l9kxt\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.172652 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-slzml"] Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.180154 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.213707 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.257689 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.257783 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-config\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.257811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.257865 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.257895 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8csxp\" (UniqueName: \"kubernetes.io/projected/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-kube-api-access-8csxp\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: 
\"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.257921 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.257952 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9kxt\" (UniqueName: \"kubernetes.io/projected/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-kube-api-access-l9kxt\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.258018 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.258049 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-svc\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.266575 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.268061 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.282771 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9kxt\" (UniqueName: \"kubernetes.io/projected/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-kube-api-access-l9kxt\") pod \"nova-cell1-novncproxy-0\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.359824 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.359887 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-config\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.359908 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 
15:27:38.359942 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8csxp\" (UniqueName: \"kubernetes.io/projected/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-kube-api-access-8csxp\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.360002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.360023 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-svc\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.360986 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-svc\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.361060 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.361749 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-config\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.361923 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.361958 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.383138 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8csxp\" (UniqueName: \"kubernetes.io/projected/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-kube-api-access-8csxp\") pod \"dnsmasq-dns-757b4f8459-slzml\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.457376 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.506025 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.629702 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xt45f"] Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.705950 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.890550 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xj8wc"] Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.892197 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.897595 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.897879 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.900982 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xj8wc"] Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.969786 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-scripts\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.969840 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: 
\"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.970023 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-config-data\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.970070 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9btv\" (UniqueName: \"kubernetes.io/projected/81ab2659-cc62-4f55-981e-2f2887d7fbe1-kube-api-access-p9btv\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:38 crc kubenswrapper[4835]: I0216 15:27:38.983201 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.003515 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.072662 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-scripts\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.072711 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " 
pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.072759 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-config-data\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.072779 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9btv\" (UniqueName: \"kubernetes.io/projected/81ab2659-cc62-4f55-981e-2f2887d7fbe1-kube-api-access-p9btv\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.077094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.077430 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-scripts\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.078114 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-config-data\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " 
pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.087729 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9btv\" (UniqueName: \"kubernetes.io/projected/81ab2659-cc62-4f55-981e-2f2887d7fbe1-kube-api-access-p9btv\") pod \"nova-cell1-conductor-db-sync-xj8wc\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.187720 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:27:39 crc kubenswrapper[4835]: W0216 15:27:39.194755 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1be0de54_02e9_4cfa_9fef_b5e5c00bd572.slice/crio-2c26d3a68ded3ec4beb1c88520f6b0616608128096c60667f4cd51015ca1e434 WatchSource:0}: Error finding container 2c26d3a68ded3ec4beb1c88520f6b0616608128096c60667f4cd51015ca1e434: Status 404 returned error can't find the container with id 2c26d3a68ded3ec4beb1c88520f6b0616608128096c60667f4cd51015ca1e434 Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.199333 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-slzml"] Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.254223 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.549063 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"373e7d99-8fac-412f-a502-ac6ecee3e809","Type":"ContainerStarted","Data":"375415bbec2f86b865d6fb8fbd52858a111da6d50f1301f710aef67fdde874d2"} Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.568852 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xt45f" event={"ID":"d67a4473-b44e-41b6-b975-25f3f4f34ad8","Type":"ContainerStarted","Data":"1df3385d1f34bcbf22783a9e95e43343e0835c13fad43cc54c6a970978a3135f"} Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.568918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xt45f" event={"ID":"d67a4473-b44e-41b6-b975-25f3f4f34ad8","Type":"ContainerStarted","Data":"531cac6892f97217b376c35597f666b7f56db8fc28d8aff24317a362cc01bf89"} Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.571327 4835 generic.go:334] "Generic (PLEG): container finished" podID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" containerID="ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322" exitCode=0 Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.571373 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-slzml" event={"ID":"1be0de54-02e9-4cfa-9fef-b5e5c00bd572","Type":"ContainerDied","Data":"ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322"} Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.571426 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-slzml" event={"ID":"1be0de54-02e9-4cfa-9fef-b5e5c00bd572","Type":"ContainerStarted","Data":"2c26d3a68ded3ec4beb1c88520f6b0616608128096c60667f4cd51015ca1e434"} Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.572623 4835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ad6fbc1-2208-46a2-8fe7-87aebc51475d","Type":"ContainerStarted","Data":"5c5f049aa7b1bef51bea37470d16d4ef0d07df8c1a85a9271631eb32501dcf3c"} Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.589580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b3694d0f-1549-4763-9eb8-b91775af1371","Type":"ContainerStarted","Data":"d12d7095bdd0a36f3e4278c05dc6f1808747ab1ab8082bfcdde68170f66140bf"} Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.608667 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d050dfb9-af7a-4642-be0e-892734cca6e0","Type":"ContainerStarted","Data":"a31b280d38bffc0a4be3e9acf31c91e4d20e6b2bf299be0bd8284e5456a77769"} Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.629133 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xt45f" podStartSLOduration=2.629112953 podStartE2EDuration="2.629112953s" podCreationTimestamp="2026-02-16 15:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:39.605899929 +0000 UTC m=+1208.897892834" watchObservedRunningTime="2026-02-16 15:27:39.629112953 +0000 UTC m=+1208.921105848" Feb 16 15:27:39 crc kubenswrapper[4835]: I0216 15:27:39.822889 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xj8wc"] Feb 16 15:27:40 crc kubenswrapper[4835]: I0216 15:27:40.640039 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xj8wc" event={"ID":"81ab2659-cc62-4f55-981e-2f2887d7fbe1","Type":"ContainerStarted","Data":"86487022e8d0a5e138828d7edafbdf59326490463e88e4a30b258ec8a8ee3d78"} Feb 16 15:27:40 crc kubenswrapper[4835]: I0216 15:27:40.640625 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-xj8wc" event={"ID":"81ab2659-cc62-4f55-981e-2f2887d7fbe1","Type":"ContainerStarted","Data":"9bb1b90ce197152e6b1f8dfea7c7e65c9d460534cbeb614ff0a5898286b58181"} Feb 16 15:27:40 crc kubenswrapper[4835]: I0216 15:27:40.642617 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-slzml" event={"ID":"1be0de54-02e9-4cfa-9fef-b5e5c00bd572","Type":"ContainerStarted","Data":"f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f"} Feb 16 15:27:40 crc kubenswrapper[4835]: I0216 15:27:40.642941 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:40 crc kubenswrapper[4835]: I0216 15:27:40.663481 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xj8wc" podStartSLOduration=2.6634624110000003 podStartE2EDuration="2.663462411s" podCreationTimestamp="2026-02-16 15:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:40.663358349 +0000 UTC m=+1209.955351244" watchObservedRunningTime="2026-02-16 15:27:40.663462411 +0000 UTC m=+1209.955455296" Feb 16 15:27:40 crc kubenswrapper[4835]: I0216 15:27:40.684007 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-slzml" podStartSLOduration=3.683990886 podStartE2EDuration="3.683990886s" podCreationTimestamp="2026-02-16 15:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:40.680739751 +0000 UTC m=+1209.972732646" watchObservedRunningTime="2026-02-16 15:27:40.683990886 +0000 UTC m=+1209.975983781" Feb 16 15:27:41 crc kubenswrapper[4835]: I0216 15:27:41.052821 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] 
Feb 16 15:27:41 crc kubenswrapper[4835]: I0216 15:27:41.064844 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:27:41 crc kubenswrapper[4835]: E0216 15:27:41.388257 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.676154 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ad6fbc1-2208-46a2-8fe7-87aebc51475d","Type":"ContainerStarted","Data":"4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935"} Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.676250 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3ad6fbc1-2208-46a2-8fe7-87aebc51475d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935" gracePeriod=30 Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.680232 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b3694d0f-1549-4763-9eb8-b91775af1371","Type":"ContainerStarted","Data":"89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c"} Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.685164 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d050dfb9-af7a-4642-be0e-892734cca6e0","Type":"ContainerStarted","Data":"441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021"} Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.685235 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d050dfb9-af7a-4642-be0e-892734cca6e0","Type":"ContainerStarted","Data":"7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d"} Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.689999 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"373e7d99-8fac-412f-a502-ac6ecee3e809","Type":"ContainerStarted","Data":"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d"} Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.690211 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"373e7d99-8fac-412f-a502-ac6ecee3e809","Type":"ContainerStarted","Data":"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67"} Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.690078 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerName="nova-metadata-metadata" containerID="cri-o://8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d" gracePeriod=30 Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.690060 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerName="nova-metadata-log" containerID="cri-o://4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67" gracePeriod=30 Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.713976 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.528416565 podStartE2EDuration="6.713952656s" podCreationTimestamp="2026-02-16 15:27:37 +0000 UTC" firstStartedPulling="2026-02-16 15:27:39.185771691 +0000 UTC m=+1208.477764586" lastFinishedPulling="2026-02-16 15:27:42.371307782 +0000 UTC m=+1211.663300677" observedRunningTime="2026-02-16 15:27:43.693423512 +0000 UTC 
m=+1212.985416407" watchObservedRunningTime="2026-02-16 15:27:43.713952656 +0000 UTC m=+1213.005945561" Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.734368 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.315447801 podStartE2EDuration="6.734313266s" podCreationTimestamp="2026-02-16 15:27:37 +0000 UTC" firstStartedPulling="2026-02-16 15:27:38.9391176 +0000 UTC m=+1208.231110495" lastFinishedPulling="2026-02-16 15:27:42.357983065 +0000 UTC m=+1211.649975960" observedRunningTime="2026-02-16 15:27:43.722203531 +0000 UTC m=+1213.014196446" watchObservedRunningTime="2026-02-16 15:27:43.734313266 +0000 UTC m=+1213.026306171" Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.750158 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.352413883 podStartE2EDuration="6.750137568s" podCreationTimestamp="2026-02-16 15:27:37 +0000 UTC" firstStartedPulling="2026-02-16 15:27:38.967061807 +0000 UTC m=+1208.259054702" lastFinishedPulling="2026-02-16 15:27:42.364785472 +0000 UTC m=+1211.656778387" observedRunningTime="2026-02-16 15:27:43.738181907 +0000 UTC m=+1213.030174802" watchObservedRunningTime="2026-02-16 15:27:43.750137568 +0000 UTC m=+1213.042130473" Feb 16 15:27:43 crc kubenswrapper[4835]: I0216 15:27:43.765898 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.103918074 podStartE2EDuration="6.765882778s" podCreationTimestamp="2026-02-16 15:27:37 +0000 UTC" firstStartedPulling="2026-02-16 15:27:38.703299131 +0000 UTC m=+1207.995292026" lastFinishedPulling="2026-02-16 15:27:42.365263835 +0000 UTC m=+1211.657256730" observedRunningTime="2026-02-16 15:27:43.757708255 +0000 UTC m=+1213.049701160" watchObservedRunningTime="2026-02-16 15:27:43.765882778 +0000 UTC m=+1213.057875673" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.470742 
4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.612108 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pnz8\" (UniqueName: \"kubernetes.io/projected/373e7d99-8fac-412f-a502-ac6ecee3e809-kube-api-access-4pnz8\") pod \"373e7d99-8fac-412f-a502-ac6ecee3e809\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.612252 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7d99-8fac-412f-a502-ac6ecee3e809-logs\") pod \"373e7d99-8fac-412f-a502-ac6ecee3e809\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.612285 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-config-data\") pod \"373e7d99-8fac-412f-a502-ac6ecee3e809\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.612653 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/373e7d99-8fac-412f-a502-ac6ecee3e809-logs" (OuterVolumeSpecName: "logs") pod "373e7d99-8fac-412f-a502-ac6ecee3e809" (UID: "373e7d99-8fac-412f-a502-ac6ecee3e809"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.612729 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-combined-ca-bundle\") pod \"373e7d99-8fac-412f-a502-ac6ecee3e809\" (UID: \"373e7d99-8fac-412f-a502-ac6ecee3e809\") " Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.613677 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/373e7d99-8fac-412f-a502-ac6ecee3e809-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.617663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373e7d99-8fac-412f-a502-ac6ecee3e809-kube-api-access-4pnz8" (OuterVolumeSpecName: "kube-api-access-4pnz8") pod "373e7d99-8fac-412f-a502-ac6ecee3e809" (UID: "373e7d99-8fac-412f-a502-ac6ecee3e809"). InnerVolumeSpecName "kube-api-access-4pnz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.652565 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "373e7d99-8fac-412f-a502-ac6ecee3e809" (UID: "373e7d99-8fac-412f-a502-ac6ecee3e809"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.701187 4835 generic.go:334] "Generic (PLEG): container finished" podID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerID="8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d" exitCode=0 Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.701222 4835 generic.go:334] "Generic (PLEG): container finished" podID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerID="4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67" exitCode=143 Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.701229 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"373e7d99-8fac-412f-a502-ac6ecee3e809","Type":"ContainerDied","Data":"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d"} Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.701265 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"373e7d99-8fac-412f-a502-ac6ecee3e809","Type":"ContainerDied","Data":"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67"} Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.701277 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.701321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"373e7d99-8fac-412f-a502-ac6ecee3e809","Type":"ContainerDied","Data":"375415bbec2f86b865d6fb8fbd52858a111da6d50f1301f710aef67fdde874d2"} Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.701341 4835 scope.go:117] "RemoveContainer" containerID="8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.716146 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pnz8\" (UniqueName: \"kubernetes.io/projected/373e7d99-8fac-412f-a502-ac6ecee3e809-kube-api-access-4pnz8\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.716176 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.722897 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-config-data" (OuterVolumeSpecName: "config-data") pod "373e7d99-8fac-412f-a502-ac6ecee3e809" (UID: "373e7d99-8fac-412f-a502-ac6ecee3e809"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.784307 4835 scope.go:117] "RemoveContainer" containerID="4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.802595 4835 scope.go:117] "RemoveContainer" containerID="8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d" Feb 16 15:27:44 crc kubenswrapper[4835]: E0216 15:27:44.803200 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d\": container with ID starting with 8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d not found: ID does not exist" containerID="8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.803244 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d"} err="failed to get container status \"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d\": rpc error: code = NotFound desc = could not find container \"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d\": container with ID starting with 8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d not found: ID does not exist" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.803282 4835 scope.go:117] "RemoveContainer" containerID="4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67" Feb 16 15:27:44 crc kubenswrapper[4835]: E0216 15:27:44.803670 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67\": container with ID starting with 
4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67 not found: ID does not exist" containerID="4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.803715 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67"} err="failed to get container status \"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67\": rpc error: code = NotFound desc = could not find container \"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67\": container with ID starting with 4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67 not found: ID does not exist" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.803744 4835 scope.go:117] "RemoveContainer" containerID="8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.804059 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d"} err="failed to get container status \"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d\": rpc error: code = NotFound desc = could not find container \"8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d\": container with ID starting with 8e3f34e880bf52a4e6a72993f93d6de9c9cf285f8ab345576a0f002b9f84434d not found: ID does not exist" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.804083 4835 scope.go:117] "RemoveContainer" containerID="4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.804336 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67"} err="failed to get container status 
\"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67\": rpc error: code = NotFound desc = could not find container \"4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67\": container with ID starting with 4b956452fb3dab88c65a9d9952a7cc086353506fc6800ebe0799d0c38301cf67 not found: ID does not exist" Feb 16 15:27:44 crc kubenswrapper[4835]: I0216 15:27:44.817784 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e7d99-8fac-412f-a502-ac6ecee3e809-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.038649 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.048941 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.070680 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:45 crc kubenswrapper[4835]: E0216 15:27:45.071639 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerName="nova-metadata-log" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.071652 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerName="nova-metadata-log" Feb 16 15:27:45 crc kubenswrapper[4835]: E0216 15:27:45.071665 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerName="nova-metadata-metadata" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.071672 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerName="nova-metadata-metadata" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.072390 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerName="nova-metadata-metadata" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.072433 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" containerName="nova-metadata-log" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.076290 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.080386 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.085539 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.110095 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.128648 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.128796 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911747f7-35be-44ec-8997-91f6f629504a-logs\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.128881 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-config-data\") pod 
\"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.128917 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstbp\" (UniqueName: \"kubernetes.io/projected/911747f7-35be-44ec-8997-91f6f629504a-kube-api-access-lstbp\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.128967 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.231185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911747f7-35be-44ec-8997-91f6f629504a-logs\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.231330 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-config-data\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.231357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lstbp\" (UniqueName: \"kubernetes.io/projected/911747f7-35be-44ec-8997-91f6f629504a-kube-api-access-lstbp\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: 
I0216 15:27:45.231420 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.231491 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.231622 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911747f7-35be-44ec-8997-91f6f629504a-logs\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.236113 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.236997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.242119 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-config-data\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.254172 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lstbp\" (UniqueName: \"kubernetes.io/projected/911747f7-35be-44ec-8997-91f6f629504a-kube-api-access-lstbp\") pod \"nova-metadata-0\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.393821 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373e7d99-8fac-412f-a502-ac6ecee3e809" path="/var/lib/kubelet/pods/373e7d99-8fac-412f-a502-ac6ecee3e809/volumes" Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.395284 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:27:45 crc kubenswrapper[4835]: W0216 15:27:45.871491 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod911747f7_35be_44ec_8997_91f6f629504a.slice/crio-599733d5eb0597633d35106c68145857de900ede999628a97a2144c56b2f0f11 WatchSource:0}: Error finding container 599733d5eb0597633d35106c68145857de900ede999628a97a2144c56b2f0f11: Status 404 returned error can't find the container with id 599733d5eb0597633d35106c68145857de900ede999628a97a2144c56b2f0f11 Feb 16 15:27:45 crc kubenswrapper[4835]: I0216 15:27:45.884874 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:46 crc kubenswrapper[4835]: I0216 15:27:46.723957 4835 generic.go:334] "Generic (PLEG): container finished" podID="d67a4473-b44e-41b6-b975-25f3f4f34ad8" containerID="1df3385d1f34bcbf22783a9e95e43343e0835c13fad43cc54c6a970978a3135f" exitCode=0 Feb 16 15:27:46 crc kubenswrapper[4835]: I0216 15:27:46.724280 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xt45f" event={"ID":"d67a4473-b44e-41b6-b975-25f3f4f34ad8","Type":"ContainerDied","Data":"1df3385d1f34bcbf22783a9e95e43343e0835c13fad43cc54c6a970978a3135f"} Feb 16 15:27:46 crc kubenswrapper[4835]: I0216 15:27:46.729141 4835 generic.go:334] "Generic (PLEG): container finished" podID="81ab2659-cc62-4f55-981e-2f2887d7fbe1" containerID="86487022e8d0a5e138828d7edafbdf59326490463e88e4a30b258ec8a8ee3d78" exitCode=0 Feb 16 15:27:46 crc kubenswrapper[4835]: I0216 15:27:46.729225 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xj8wc" event={"ID":"81ab2659-cc62-4f55-981e-2f2887d7fbe1","Type":"ContainerDied","Data":"86487022e8d0a5e138828d7edafbdf59326490463e88e4a30b258ec8a8ee3d78"} Feb 16 15:27:46 crc kubenswrapper[4835]: I0216 15:27:46.731485 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"911747f7-35be-44ec-8997-91f6f629504a","Type":"ContainerStarted","Data":"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8"} Feb 16 15:27:46 crc kubenswrapper[4835]: I0216 15:27:46.731556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"911747f7-35be-44ec-8997-91f6f629504a","Type":"ContainerStarted","Data":"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893"} Feb 16 15:27:46 crc kubenswrapper[4835]: I0216 15:27:46.731577 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"911747f7-35be-44ec-8997-91f6f629504a","Type":"ContainerStarted","Data":"599733d5eb0597633d35106c68145857de900ede999628a97a2144c56b2f0f11"} Feb 16 15:27:46 crc kubenswrapper[4835]: I0216 15:27:46.776445 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.776427543 podStartE2EDuration="1.776427543s" podCreationTimestamp="2026-02-16 
15:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:46.761477504 +0000 UTC m=+1216.053470409" watchObservedRunningTime="2026-02-16 15:27:46.776427543 +0000 UTC m=+1216.068420438" Feb 16 15:27:47 crc kubenswrapper[4835]: I0216 15:27:47.942734 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:27:47 crc kubenswrapper[4835]: I0216 15:27:47.943002 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.215595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.215854 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.280443 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.374960 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.381180 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.459623 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.509735 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-combined-ca-bundle\") pod \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.509823 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-config-data\") pod \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.509941 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv2md\" (UniqueName: \"kubernetes.io/projected/d67a4473-b44e-41b6-b975-25f3f4f34ad8-kube-api-access-qv2md\") pod \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.510000 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9btv\" (UniqueName: \"kubernetes.io/projected/81ab2659-cc62-4f55-981e-2f2887d7fbe1-kube-api-access-p9btv\") pod \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.510082 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-scripts\") pod \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\" (UID: 
\"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.510132 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-combined-ca-bundle\") pod \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.510191 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-scripts\") pod \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\" (UID: \"81ab2659-cc62-4f55-981e-2f2887d7fbe1\") " Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.510220 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-config-data\") pod \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\" (UID: \"d67a4473-b44e-41b6-b975-25f3f4f34ad8\") " Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.514575 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.530306 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-scripts" (OuterVolumeSpecName: "scripts") pod "d67a4473-b44e-41b6-b975-25f3f4f34ad8" (UID: "d67a4473-b44e-41b6-b975-25f3f4f34ad8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.530411 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-scripts" (OuterVolumeSpecName: "scripts") pod "81ab2659-cc62-4f55-981e-2f2887d7fbe1" (UID: "81ab2659-cc62-4f55-981e-2f2887d7fbe1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.530705 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d67a4473-b44e-41b6-b975-25f3f4f34ad8-kube-api-access-qv2md" (OuterVolumeSpecName: "kube-api-access-qv2md") pod "d67a4473-b44e-41b6-b975-25f3f4f34ad8" (UID: "d67a4473-b44e-41b6-b975-25f3f4f34ad8"). InnerVolumeSpecName "kube-api-access-qv2md". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.558667 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ab2659-cc62-4f55-981e-2f2887d7fbe1" (UID: "81ab2659-cc62-4f55-981e-2f2887d7fbe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.559076 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ab2659-cc62-4f55-981e-2f2887d7fbe1-kube-api-access-p9btv" (OuterVolumeSpecName: "kube-api-access-p9btv") pod "81ab2659-cc62-4f55-981e-2f2887d7fbe1" (UID: "81ab2659-cc62-4f55-981e-2f2887d7fbe1"). InnerVolumeSpecName "kube-api-access-p9btv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.596636 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nx47s"] Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.596858 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" podUID="ac798d12-9bfa-4bbd-b013-a91e06a14507" containerName="dnsmasq-dns" containerID="cri-o://ee536bffe31678bd43ee3c1f890e746fa19bb4cfd94003766757c2afda2ef8fb" gracePeriod=10 Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.613406 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.613431 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.613442 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv2md\" (UniqueName: \"kubernetes.io/projected/d67a4473-b44e-41b6-b975-25f3f4f34ad8-kube-api-access-qv2md\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.613450 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9btv\" (UniqueName: \"kubernetes.io/projected/81ab2659-cc62-4f55-981e-2f2887d7fbe1-kube-api-access-p9btv\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.613459 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.629094 4835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-config-data" (OuterVolumeSpecName: "config-data") pod "d67a4473-b44e-41b6-b975-25f3f4f34ad8" (UID: "d67a4473-b44e-41b6-b975-25f3f4f34ad8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.630877 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-config-data" (OuterVolumeSpecName: "config-data") pod "81ab2659-cc62-4f55-981e-2f2887d7fbe1" (UID: "81ab2659-cc62-4f55-981e-2f2887d7fbe1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.637598 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d67a4473-b44e-41b6-b975-25f3f4f34ad8" (UID: "d67a4473-b44e-41b6-b975-25f3f4f34ad8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.715966 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.715991 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67a4473-b44e-41b6-b975-25f3f4f34ad8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.716000 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ab2659-cc62-4f55-981e-2f2887d7fbe1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.758841 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xj8wc" event={"ID":"81ab2659-cc62-4f55-981e-2f2887d7fbe1","Type":"ContainerDied","Data":"9bb1b90ce197152e6b1f8dfea7c7e65c9d460534cbeb614ff0a5898286b58181"} Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.758875 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb1b90ce197152e6b1f8dfea7c7e65c9d460534cbeb614ff0a5898286b58181" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.758950 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xj8wc" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.777805 4835 generic.go:334] "Generic (PLEG): container finished" podID="ac798d12-9bfa-4bbd-b013-a91e06a14507" containerID="ee536bffe31678bd43ee3c1f890e746fa19bb4cfd94003766757c2afda2ef8fb" exitCode=0 Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.777870 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" event={"ID":"ac798d12-9bfa-4bbd-b013-a91e06a14507","Type":"ContainerDied","Data":"ee536bffe31678bd43ee3c1f890e746fa19bb4cfd94003766757c2afda2ef8fb"} Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.785398 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xt45f" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.786009 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xt45f" event={"ID":"d67a4473-b44e-41b6-b975-25f3f4f34ad8","Type":"ContainerDied","Data":"531cac6892f97217b376c35597f666b7f56db8fc28d8aff24317a362cc01bf89"} Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.786039 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="531cac6892f97217b376c35597f666b7f56db8fc28d8aff24317a362cc01bf89" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.882591 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 15:27:48 crc kubenswrapper[4835]: E0216 15:27:48.883365 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ab2659-cc62-4f55-981e-2f2887d7fbe1" containerName="nova-cell1-conductor-db-sync" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.883383 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ab2659-cc62-4f55-981e-2f2887d7fbe1" containerName="nova-cell1-conductor-db-sync" Feb 16 15:27:48 crc kubenswrapper[4835]: E0216 
15:27:48.883413 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67a4473-b44e-41b6-b975-25f3f4f34ad8" containerName="nova-manage" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.883419 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67a4473-b44e-41b6-b975-25f3f4f34ad8" containerName="nova-manage" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.883679 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ab2659-cc62-4f55-981e-2f2887d7fbe1" containerName="nova-cell1-conductor-db-sync" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.883703 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d67a4473-b44e-41b6-b975-25f3f4f34ad8" containerName="nova-manage" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.884468 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.886964 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.890166 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:27:48 crc kubenswrapper[4835]: I0216 15:27:48.915860 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.027082 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.027190 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.027305 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrtd\" (UniqueName: \"kubernetes.io/projected/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-kube-api-access-6wrtd\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.030770 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.030885 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.057374 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.057667 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-log" containerID="cri-o://7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d" gracePeriod=30 Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.058092 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-api" containerID="cri-o://441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021" gracePeriod=30 Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.062257 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.062539 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="911747f7-35be-44ec-8997-91f6f629504a" containerName="nova-metadata-metadata" containerID="cri-o://e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8" gracePeriod=30 Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.062470 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="911747f7-35be-44ec-8997-91f6f629504a" containerName="nova-metadata-log" containerID="cri-o://f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893" gracePeriod=30 Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.128745 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wrtd\" (UniqueName: \"kubernetes.io/projected/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-kube-api-access-6wrtd\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.128832 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.128881 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.132738 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.140167 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.147312 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wrtd\" (UniqueName: \"kubernetes.io/projected/91b52c41-2afa-4ee9-8239-5ffaf418e1f1-kube-api-access-6wrtd\") pod \"nova-cell1-conductor-0\" (UID: \"91b52c41-2afa-4ee9-8239-5ffaf418e1f1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.152906 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.216327 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.230808 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-config\") pod \"ac798d12-9bfa-4bbd-b013-a91e06a14507\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.230854 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpp2z\" (UniqueName: \"kubernetes.io/projected/ac798d12-9bfa-4bbd-b013-a91e06a14507-kube-api-access-fpp2z\") pod \"ac798d12-9bfa-4bbd-b013-a91e06a14507\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.230903 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-swift-storage-0\") pod \"ac798d12-9bfa-4bbd-b013-a91e06a14507\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.230998 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-nb\") pod \"ac798d12-9bfa-4bbd-b013-a91e06a14507\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.231093 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-svc\") pod \"ac798d12-9bfa-4bbd-b013-a91e06a14507\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.231108 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-sb\") pod \"ac798d12-9bfa-4bbd-b013-a91e06a14507\" (UID: \"ac798d12-9bfa-4bbd-b013-a91e06a14507\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.250057 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac798d12-9bfa-4bbd-b013-a91e06a14507-kube-api-access-fpp2z" (OuterVolumeSpecName: "kube-api-access-fpp2z") pod "ac798d12-9bfa-4bbd-b013-a91e06a14507" (UID: "ac798d12-9bfa-4bbd-b013-a91e06a14507"). InnerVolumeSpecName "kube-api-access-fpp2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.297323 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-config" (OuterVolumeSpecName: "config") pod "ac798d12-9bfa-4bbd-b013-a91e06a14507" (UID: "ac798d12-9bfa-4bbd-b013-a91e06a14507"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.317513 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac798d12-9bfa-4bbd-b013-a91e06a14507" (UID: "ac798d12-9bfa-4bbd-b013-a91e06a14507"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.330044 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac798d12-9bfa-4bbd-b013-a91e06a14507" (UID: "ac798d12-9bfa-4bbd-b013-a91e06a14507"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.333748 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.333776 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.333788 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpp2z\" (UniqueName: \"kubernetes.io/projected/ac798d12-9bfa-4bbd-b013-a91e06a14507-kube-api-access-fpp2z\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.333807 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.335311 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac798d12-9bfa-4bbd-b013-a91e06a14507" (UID: "ac798d12-9bfa-4bbd-b013-a91e06a14507"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.337329 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac798d12-9bfa-4bbd-b013-a91e06a14507" (UID: "ac798d12-9bfa-4bbd-b013-a91e06a14507"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.435880 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.436207 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac798d12-9bfa-4bbd-b013-a91e06a14507-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.466187 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.712825 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.819843 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.820019 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-nx47s" event={"ID":"ac798d12-9bfa-4bbd-b013-a91e06a14507","Type":"ContainerDied","Data":"733e3efed305b7b8046caf1f690ab36fe30e9da8933a0bc43854b01929325acf"} Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.820065 4835 scope.go:117] "RemoveContainer" containerID="ee536bffe31678bd43ee3c1f890e746fa19bb4cfd94003766757c2afda2ef8fb" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.823934 4835 generic.go:334] "Generic (PLEG): container finished" podID="911747f7-35be-44ec-8997-91f6f629504a" containerID="e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8" exitCode=0 Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.823959 4835 generic.go:334] "Generic (PLEG): container finished" podID="911747f7-35be-44ec-8997-91f6f629504a" containerID="f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893" exitCode=143 Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.823996 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"911747f7-35be-44ec-8997-91f6f629504a","Type":"ContainerDied","Data":"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8"} Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.824006 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.824016 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"911747f7-35be-44ec-8997-91f6f629504a","Type":"ContainerDied","Data":"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893"} Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.824025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"911747f7-35be-44ec-8997-91f6f629504a","Type":"ContainerDied","Data":"599733d5eb0597633d35106c68145857de900ede999628a97a2144c56b2f0f11"} Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.841777 4835 generic.go:334] "Generic (PLEG): container finished" podID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerID="7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d" exitCode=143 Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.842354 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d050dfb9-af7a-4642-be0e-892734cca6e0","Type":"ContainerDied","Data":"7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d"} Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.842821 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-config-data\") pod \"911747f7-35be-44ec-8997-91f6f629504a\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.842870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-combined-ca-bundle\") pod \"911747f7-35be-44ec-8997-91f6f629504a\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.842905 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-nova-metadata-tls-certs\") pod \"911747f7-35be-44ec-8997-91f6f629504a\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.842983 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911747f7-35be-44ec-8997-91f6f629504a-logs\") pod \"911747f7-35be-44ec-8997-91f6f629504a\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.843042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lstbp\" (UniqueName: \"kubernetes.io/projected/911747f7-35be-44ec-8997-91f6f629504a-kube-api-access-lstbp\") pod \"911747f7-35be-44ec-8997-91f6f629504a\" (UID: \"911747f7-35be-44ec-8997-91f6f629504a\") " Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.849937 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911747f7-35be-44ec-8997-91f6f629504a-logs" (OuterVolumeSpecName: "logs") pod "911747f7-35be-44ec-8997-91f6f629504a" (UID: "911747f7-35be-44ec-8997-91f6f629504a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.850977 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911747f7-35be-44ec-8997-91f6f629504a-kube-api-access-lstbp" (OuterVolumeSpecName: "kube-api-access-lstbp") pod "911747f7-35be-44ec-8997-91f6f629504a" (UID: "911747f7-35be-44ec-8997-91f6f629504a"). InnerVolumeSpecName "kube-api-access-lstbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.855044 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nx47s"] Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.862770 4835 scope.go:117] "RemoveContainer" containerID="fdde2115359dfdad633ab90d25431e49284a54bd672ee2b4d3722a753d4d8e11" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.866854 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-nx47s"] Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.881997 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "911747f7-35be-44ec-8997-91f6f629504a" (UID: "911747f7-35be-44ec-8997-91f6f629504a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.889712 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-config-data" (OuterVolumeSpecName: "config-data") pod "911747f7-35be-44ec-8997-91f6f629504a" (UID: "911747f7-35be-44ec-8997-91f6f629504a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.903912 4835 scope.go:117] "RemoveContainer" containerID="e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.909258 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 15:27:49 crc kubenswrapper[4835]: W0216 15:27:49.909629 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91b52c41_2afa_4ee9_8239_5ffaf418e1f1.slice/crio-000106c2e08ed7cf63a287ab391157e6b9745470667970c2f079848aaee790f8 WatchSource:0}: Error finding container 000106c2e08ed7cf63a287ab391157e6b9745470667970c2f079848aaee790f8: Status 404 returned error can't find the container with id 000106c2e08ed7cf63a287ab391157e6b9745470667970c2f079848aaee790f8 Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.912245 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "911747f7-35be-44ec-8997-91f6f629504a" (UID: "911747f7-35be-44ec-8997-91f6f629504a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.927617 4835 scope.go:117] "RemoveContainer" containerID="f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.948050 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.948102 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.948114 4835 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/911747f7-35be-44ec-8997-91f6f629504a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.948125 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911747f7-35be-44ec-8997-91f6f629504a-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.948134 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lstbp\" (UniqueName: \"kubernetes.io/projected/911747f7-35be-44ec-8997-91f6f629504a-kube-api-access-lstbp\") on node \"crc\" DevicePath \"\"" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.948775 4835 scope.go:117] "RemoveContainer" containerID="e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8" Feb 16 15:27:49 crc kubenswrapper[4835]: E0216 15:27:49.949165 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8\": container with ID starting with e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8 not found: ID does not exist" containerID="e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.949251 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8"} err="failed to get container status \"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8\": rpc error: code = NotFound desc = could not find container \"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8\": container with ID starting with e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8 not found: ID does not exist" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.949286 4835 scope.go:117] "RemoveContainer" containerID="f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893" Feb 16 15:27:49 crc kubenswrapper[4835]: E0216 15:27:49.949609 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893\": container with ID starting with f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893 not found: ID does not exist" containerID="f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.949638 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893"} err="failed to get container status \"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893\": rpc error: code = NotFound desc = could not find container \"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893\": container with ID 
starting with f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893 not found: ID does not exist" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.949656 4835 scope.go:117] "RemoveContainer" containerID="e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.950144 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8"} err="failed to get container status \"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8\": rpc error: code = NotFound desc = could not find container \"e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8\": container with ID starting with e208df1c8bbb2d2c07145edaa2c548627df8dc9ca68dd83bd22ea4a0561806f8 not found: ID does not exist" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.950171 4835 scope.go:117] "RemoveContainer" containerID="f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893" Feb 16 15:27:49 crc kubenswrapper[4835]: I0216 15:27:49.950497 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893"} err="failed to get container status \"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893\": rpc error: code = NotFound desc = could not find container \"f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893\": container with ID starting with f343f2d12ef1991ea6667410e4af81752d76e19ff4022672a01a78d038bff893 not found: ID does not exist" Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.157715 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.169308 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:27:50 crc kubenswrapper[4835]: 
I0216 15:27:50.177726 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:27:50 crc kubenswrapper[4835]: E0216 15:27:50.178139 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911747f7-35be-44ec-8997-91f6f629504a" containerName="nova-metadata-metadata"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.178155 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="911747f7-35be-44ec-8997-91f6f629504a" containerName="nova-metadata-metadata"
Feb 16 15:27:50 crc kubenswrapper[4835]: E0216 15:27:50.178170 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911747f7-35be-44ec-8997-91f6f629504a" containerName="nova-metadata-log"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.178177 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="911747f7-35be-44ec-8997-91f6f629504a" containerName="nova-metadata-log"
Feb 16 15:27:50 crc kubenswrapper[4835]: E0216 15:27:50.178190 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac798d12-9bfa-4bbd-b013-a91e06a14507" containerName="dnsmasq-dns"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.178196 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac798d12-9bfa-4bbd-b013-a91e06a14507" containerName="dnsmasq-dns"
Feb 16 15:27:50 crc kubenswrapper[4835]: E0216 15:27:50.178222 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac798d12-9bfa-4bbd-b013-a91e06a14507" containerName="init"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.178229 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac798d12-9bfa-4bbd-b013-a91e06a14507" containerName="init"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.178424 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="911747f7-35be-44ec-8997-91f6f629504a" containerName="nova-metadata-metadata"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.178443 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac798d12-9bfa-4bbd-b013-a91e06a14507" containerName="dnsmasq-dns"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.178461 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="911747f7-35be-44ec-8997-91f6f629504a" containerName="nova-metadata-log"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.179619 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.181268 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.181655 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.203588 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.254276 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.254688 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-logs\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.254751 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.254968 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-config-data\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.255117 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8zl\" (UniqueName: \"kubernetes.io/projected/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-kube-api-access-fq8zl\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.357406 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq8zl\" (UniqueName: \"kubernetes.io/projected/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-kube-api-access-fq8zl\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.357478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.357554 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-logs\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.357582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.357642 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-config-data\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.360107 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-logs\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.363359 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.370318 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.370664 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-config-data\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.383121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq8zl\" (UniqueName: \"kubernetes.io/projected/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-kube-api-access-fq8zl\") pod \"nova-metadata-0\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.564119 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.904841 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"91b52c41-2afa-4ee9-8239-5ffaf418e1f1","Type":"ContainerStarted","Data":"7bca487b763d1eb53bb52c5ad31272efbba6d3a2b8851605343c5d532956cff2"}
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.905149 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"91b52c41-2afa-4ee9-8239-5ffaf418e1f1","Type":"ContainerStarted","Data":"000106c2e08ed7cf63a287ab391157e6b9745470667970c2f079848aaee790f8"}
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.906507 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 16 15:27:50 crc kubenswrapper[4835]: I0216 15:27:50.940882 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b3694d0f-1549-4763-9eb8-b91775af1371" containerName="nova-scheduler-scheduler" containerID="cri-o://89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c" gracePeriod=30
Feb 16 15:27:51 crc kubenswrapper[4835]: I0216 15:27:51.077398 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.077383112 podStartE2EDuration="3.077383112s" podCreationTimestamp="2026-02-16 15:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:50.938949408 +0000 UTC m=+1220.230942303" watchObservedRunningTime="2026-02-16 15:27:51.077383112 +0000 UTC m=+1220.369376007"
Feb 16 15:27:51 crc kubenswrapper[4835]: I0216 15:27:51.085955 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:27:51 crc kubenswrapper[4835]: I0216 15:27:51.391683 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911747f7-35be-44ec-8997-91f6f629504a" path="/var/lib/kubelet/pods/911747f7-35be-44ec-8997-91f6f629504a/volumes"
Feb 16 15:27:51 crc kubenswrapper[4835]: I0216 15:27:51.392592 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac798d12-9bfa-4bbd-b013-a91e06a14507" path="/var/lib/kubelet/pods/ac798d12-9bfa-4bbd-b013-a91e06a14507/volumes"
Feb 16 15:27:51 crc kubenswrapper[4835]: I0216 15:27:51.950520 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6","Type":"ContainerStarted","Data":"601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3"}
Feb 16 15:27:51 crc kubenswrapper[4835]: I0216 15:27:51.950632 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6","Type":"ContainerStarted","Data":"fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34"}
Feb 16 15:27:51 crc kubenswrapper[4835]: I0216 15:27:51.950643 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6","Type":"ContainerStarted","Data":"38ef5bcef4698514390792c5d871dd01e8684173c8741dad6434a6bc173eea6e"}
Feb 16 15:27:51 crc kubenswrapper[4835]: I0216 15:27:51.975330 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.9753146579999998 podStartE2EDuration="1.975314658s" podCreationTimestamp="2026-02-16 15:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:51.971045117 +0000 UTC m=+1221.263038012" watchObservedRunningTime="2026-02-16 15:27:51.975314658 +0000 UTC m=+1221.267307553"
Feb 16 15:27:53 crc kubenswrapper[4835]: E0216 15:27:53.217961 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 15:27:53 crc kubenswrapper[4835]: E0216 15:27:53.219881 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 15:27:53 crc kubenswrapper[4835]: E0216 15:27:53.221589 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 15:27:53 crc kubenswrapper[4835]: E0216 15:27:53.221620 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b3694d0f-1549-4763-9eb8-b91775af1371" containerName="nova-scheduler-scheduler"
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.762640 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.854989 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-combined-ca-bundle\") pod \"b3694d0f-1549-4763-9eb8-b91775af1371\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") "
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.855912 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-config-data\") pod \"b3694d0f-1549-4763-9eb8-b91775af1371\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") "
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.856334 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn2rc\" (UniqueName: \"kubernetes.io/projected/b3694d0f-1549-4763-9eb8-b91775af1371-kube-api-access-cn2rc\") pod \"b3694d0f-1549-4763-9eb8-b91775af1371\" (UID: \"b3694d0f-1549-4763-9eb8-b91775af1371\") "
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.886959 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3694d0f-1549-4763-9eb8-b91775af1371-kube-api-access-cn2rc" (OuterVolumeSpecName: "kube-api-access-cn2rc") pod "b3694d0f-1549-4763-9eb8-b91775af1371" (UID: "b3694d0f-1549-4763-9eb8-b91775af1371"). InnerVolumeSpecName "kube-api-access-cn2rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.888980 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-config-data" (OuterVolumeSpecName: "config-data") pod "b3694d0f-1549-4763-9eb8-b91775af1371" (UID: "b3694d0f-1549-4763-9eb8-b91775af1371"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.891277 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3694d0f-1549-4763-9eb8-b91775af1371" (UID: "b3694d0f-1549-4763-9eb8-b91775af1371"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.892406 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.958386 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-config-data\") pod \"d050dfb9-af7a-4642-be0e-892734cca6e0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") "
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.958475 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72v9h\" (UniqueName: \"kubernetes.io/projected/d050dfb9-af7a-4642-be0e-892734cca6e0-kube-api-access-72v9h\") pod \"d050dfb9-af7a-4642-be0e-892734cca6e0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") "
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.958635 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d050dfb9-af7a-4642-be0e-892734cca6e0-logs\") pod \"d050dfb9-af7a-4642-be0e-892734cca6e0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") "
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.958725 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-combined-ca-bundle\") pod \"d050dfb9-af7a-4642-be0e-892734cca6e0\" (UID: \"d050dfb9-af7a-4642-be0e-892734cca6e0\") "
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.959178 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn2rc\" (UniqueName: \"kubernetes.io/projected/b3694d0f-1549-4763-9eb8-b91775af1371-kube-api-access-cn2rc\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.959193 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.959203 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3694d0f-1549-4763-9eb8-b91775af1371-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.959874 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d050dfb9-af7a-4642-be0e-892734cca6e0-logs" (OuterVolumeSpecName: "logs") pod "d050dfb9-af7a-4642-be0e-892734cca6e0" (UID: "d050dfb9-af7a-4642-be0e-892734cca6e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.961310 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d050dfb9-af7a-4642-be0e-892734cca6e0-kube-api-access-72v9h" (OuterVolumeSpecName: "kube-api-access-72v9h") pod "d050dfb9-af7a-4642-be0e-892734cca6e0" (UID: "d050dfb9-af7a-4642-be0e-892734cca6e0"). InnerVolumeSpecName "kube-api-access-72v9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.986747 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-config-data" (OuterVolumeSpecName: "config-data") pod "d050dfb9-af7a-4642-be0e-892734cca6e0" (UID: "d050dfb9-af7a-4642-be0e-892734cca6e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.989228 4835 generic.go:334] "Generic (PLEG): container finished" podID="b3694d0f-1549-4763-9eb8-b91775af1371" containerID="89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c" exitCode=0
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.989297 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.989310 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b3694d0f-1549-4763-9eb8-b91775af1371","Type":"ContainerDied","Data":"89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c"}
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.989342 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b3694d0f-1549-4763-9eb8-b91775af1371","Type":"ContainerDied","Data":"d12d7095bdd0a36f3e4278c05dc6f1808747ab1ab8082bfcdde68170f66140bf"}
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.989364 4835 scope.go:117] "RemoveContainer" containerID="89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c"
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.993993 4835 generic.go:334] "Generic (PLEG): container finished" podID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerID="441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021" exitCode=0
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.994020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d050dfb9-af7a-4642-be0e-892734cca6e0","Type":"ContainerDied","Data":"441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021"}
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.994064 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d050dfb9-af7a-4642-be0e-892734cca6e0","Type":"ContainerDied","Data":"a31b280d38bffc0a4be3e9acf31c91e4d20e6b2bf299be0bd8284e5456a77769"}
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.994326 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:27:54 crc kubenswrapper[4835]: I0216 15:27:54.994763 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d050dfb9-af7a-4642-be0e-892734cca6e0" (UID: "d050dfb9-af7a-4642-be0e-892734cca6e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.022938 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.024245 4835 scope.go:117] "RemoveContainer" containerID="89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c"
Feb 16 15:27:55 crc kubenswrapper[4835]: E0216 15:27:55.024673 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c\": container with ID starting with 89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c not found: ID does not exist" containerID="89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.024780 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c"} err="failed to get container status \"89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c\": rpc error: code = NotFound desc = could not find container \"89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c\": container with ID starting with 89b86e2aec2f6215a37389e9369d98afafd6b730623613b7bdc397ceb1c33d8c not found: ID does not exist"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.024874 4835 scope.go:117] "RemoveContainer" containerID="441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.033705 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.043124 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:27:55 crc kubenswrapper[4835]: E0216 15:27:55.043613 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3694d0f-1549-4763-9eb8-b91775af1371" containerName="nova-scheduler-scheduler"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.043631 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3694d0f-1549-4763-9eb8-b91775af1371" containerName="nova-scheduler-scheduler"
Feb 16 15:27:55 crc kubenswrapper[4835]: E0216 15:27:55.043666 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-api"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.043673 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-api"
Feb 16 15:27:55 crc kubenswrapper[4835]: E0216 15:27:55.043681 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-log"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.043687 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-log"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.043874 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-log"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.043894 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" containerName="nova-api-api"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.043909 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3694d0f-1549-4763-9eb8-b91775af1371" containerName="nova-scheduler-scheduler"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.044910 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.052495 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.054237 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.060659 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d050dfb9-af7a-4642-be0e-892734cca6e0-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.060686 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.060695 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d050dfb9-af7a-4642-be0e-892734cca6e0-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.060704 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72v9h\" (UniqueName: \"kubernetes.io/projected/d050dfb9-af7a-4642-be0e-892734cca6e0-kube-api-access-72v9h\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.069045 4835 scope.go:117] "RemoveContainer" containerID="7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.090656 4835 scope.go:117] "RemoveContainer" containerID="441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021"
Feb 16 15:27:55 crc kubenswrapper[4835]: E0216 15:27:55.091248 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021\": container with ID starting with 441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021 not found: ID does not exist" containerID="441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.091297 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021"} err="failed to get container status \"441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021\": rpc error: code = NotFound desc = could not find container \"441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021\": container with ID starting with 441b5702abc89c3b8a505abbdc4c43d0e9973330631e70208a54432a02559021 not found: ID does not exist"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.091326 4835 scope.go:117] "RemoveContainer" containerID="7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d"
Feb 16 15:27:55 crc kubenswrapper[4835]: E0216 15:27:55.091745 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d\": container with ID starting with 7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d not found: ID does not exist" containerID="7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.091771 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d"} err="failed to get container status \"7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d\": rpc error: code = NotFound desc = could not find container \"7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d\": container with ID starting with 7980abf0797529f1347a73692caebe7e8a842b169341592ee93fe3c9820f930d not found: ID does not exist"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.163321 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.163517 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-config-data\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.163868 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhnb\" (UniqueName: \"kubernetes.io/projected/72daa885-8d90-44e7-af41-31467c9e0643-kube-api-access-wzhnb\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.266382 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-config-data\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.266664 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzhnb\" (UniqueName: \"kubernetes.io/projected/72daa885-8d90-44e7-af41-31467c9e0643-kube-api-access-wzhnb\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.266890 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.270446 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-config-data\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.271597 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.282992 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzhnb\" (UniqueName: \"kubernetes.io/projected/72daa885-8d90-44e7-af41-31467c9e0643-kube-api-access-wzhnb\") pod \"nova-scheduler-0\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.370132 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.376807 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:27:55 crc kubenswrapper[4835]: E0216 15:27:55.382640 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.402191 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3694d0f-1549-4763-9eb8-b91775af1371" path="/var/lib/kubelet/pods/b3694d0f-1549-4763-9eb8-b91775af1371/volumes"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.402744 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.425029 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.475731 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.496655 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.511178 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.564595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.564647 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.583515 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-config-data\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.583624 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-logs\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.583670 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7cff\" (UniqueName: \"kubernetes.io/projected/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-kube-api-access-d7cff\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.583699 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.686341 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-config-data\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.686466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-logs\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.686565 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7cff\" (UniqueName: \"kubernetes.io/projected/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-kube-api-access-d7cff\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.686624 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.687173 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-logs\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.693595 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-config-data\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.699092 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.705012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7cff\" (UniqueName: \"kubernetes.io/projected/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-kube-api-access-d7cff\") pod \"nova-api-0\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " pod="openstack/nova-api-0"
Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.804627 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:27:55 crc kubenswrapper[4835]: I0216 15:27:55.880306 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:27:55 crc kubenswrapper[4835]: W0216 15:27:55.881313 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72daa885_8d90_44e7_af41_31467c9e0643.slice/crio-67eeae2df0d4dea4823e97c6b558264a4d2a6a0364d26221bfe04505e1e2ed1b WatchSource:0}: Error finding container 67eeae2df0d4dea4823e97c6b558264a4d2a6a0364d26221bfe04505e1e2ed1b: Status 404 returned error can't find the container with id 67eeae2df0d4dea4823e97c6b558264a4d2a6a0364d26221bfe04505e1e2ed1b Feb 16 15:27:56 crc kubenswrapper[4835]: I0216 15:27:56.004977 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"72daa885-8d90-44e7-af41-31467c9e0643","Type":"ContainerStarted","Data":"67eeae2df0d4dea4823e97c6b558264a4d2a6a0364d26221bfe04505e1e2ed1b"} Feb 16 15:27:56 crc kubenswrapper[4835]: W0216 15:27:56.259348 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9177a8c2_3c0c_4194_8abe_5e7796bf7e64.slice/crio-ebbad9c532ef1536692a01b4d41634d79db881ec5003f7d0a9ebd51b4bd43f54 WatchSource:0}: Error finding container ebbad9c532ef1536692a01b4d41634d79db881ec5003f7d0a9ebd51b4bd43f54: Status 404 returned error can't find the container with id ebbad9c532ef1536692a01b4d41634d79db881ec5003f7d0a9ebd51b4bd43f54 Feb 16 15:27:56 crc kubenswrapper[4835]: I0216 15:27:56.262233 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:27:57 crc kubenswrapper[4835]: I0216 15:27:57.022048 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9177a8c2-3c0c-4194-8abe-5e7796bf7e64","Type":"ContainerStarted","Data":"2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7"} Feb 16 15:27:57 crc kubenswrapper[4835]: I0216 15:27:57.022740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9177a8c2-3c0c-4194-8abe-5e7796bf7e64","Type":"ContainerStarted","Data":"049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47"} Feb 16 15:27:57 crc kubenswrapper[4835]: I0216 15:27:57.022763 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9177a8c2-3c0c-4194-8abe-5e7796bf7e64","Type":"ContainerStarted","Data":"ebbad9c532ef1536692a01b4d41634d79db881ec5003f7d0a9ebd51b4bd43f54"} Feb 16 15:27:57 crc kubenswrapper[4835]: I0216 15:27:57.024285 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"72daa885-8d90-44e7-af41-31467c9e0643","Type":"ContainerStarted","Data":"56434d808d9111c94b448ca38f358f94fe1ebf96669135267d6a8c1148933b51"} Feb 16 15:27:57 crc kubenswrapper[4835]: I0216 15:27:57.055153 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.055129373 podStartE2EDuration="2.055129373s" podCreationTimestamp="2026-02-16 15:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:27:57.04196199 +0000 UTC m=+1226.333954915" watchObservedRunningTime="2026-02-16 15:27:57.055129373 +0000 UTC m=+1226.347122278" Feb 16 15:27:57 crc kubenswrapper[4835]: I0216 15:27:57.070859 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.070836972 podStartE2EDuration="2.070836972s" podCreationTimestamp="2026-02-16 15:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 15:27:57.060113333 +0000 UTC m=+1226.352106228" watchObservedRunningTime="2026-02-16 15:27:57.070836972 +0000 UTC m=+1226.362829897" Feb 16 15:27:57 crc kubenswrapper[4835]: I0216 15:27:57.389441 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d050dfb9-af7a-4642-be0e-892734cca6e0" path="/var/lib/kubelet/pods/d050dfb9-af7a-4642-be0e-892734cca6e0/volumes" Feb 16 15:27:57 crc kubenswrapper[4835]: I0216 15:27:57.831119 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 15:27:59 crc kubenswrapper[4835]: I0216 15:27:59.266066 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 15:28:00 crc kubenswrapper[4835]: I0216 15:28:00.371208 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:28:00 crc kubenswrapper[4835]: I0216 15:28:00.564537 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:28:00 crc kubenswrapper[4835]: I0216 15:28:00.564581 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:28:01 crc kubenswrapper[4835]: I0216 15:28:01.610875 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:28:01 crc kubenswrapper[4835]: I0216 15:28:01.610873 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 16 15:28:01 crc kubenswrapper[4835]: I0216 15:28:01.717053 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:28:01 crc kubenswrapper[4835]: I0216 15:28:01.717265 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="117011cd-1ad8-4aff-b5d4-49bce3381f02" containerName="kube-state-metrics" containerID="cri-o://0bdae44239bea6763e5e8c6d3b9acb4c41c127b25e622e89284f7da3374d5f42" gracePeriod=30 Feb 16 15:28:02 crc kubenswrapper[4835]: I0216 15:28:02.075588 4835 generic.go:334] "Generic (PLEG): container finished" podID="117011cd-1ad8-4aff-b5d4-49bce3381f02" containerID="0bdae44239bea6763e5e8c6d3b9acb4c41c127b25e622e89284f7da3374d5f42" exitCode=2 Feb 16 15:28:02 crc kubenswrapper[4835]: I0216 15:28:02.076130 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"117011cd-1ad8-4aff-b5d4-49bce3381f02","Type":"ContainerDied","Data":"0bdae44239bea6763e5e8c6d3b9acb4c41c127b25e622e89284f7da3374d5f42"} Feb 16 15:28:02 crc kubenswrapper[4835]: I0216 15:28:02.315666 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:28:02 crc kubenswrapper[4835]: I0216 15:28:02.416410 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6tfd\" (UniqueName: \"kubernetes.io/projected/117011cd-1ad8-4aff-b5d4-49bce3381f02-kube-api-access-x6tfd\") pod \"117011cd-1ad8-4aff-b5d4-49bce3381f02\" (UID: \"117011cd-1ad8-4aff-b5d4-49bce3381f02\") " Feb 16 15:28:02 crc kubenswrapper[4835]: I0216 15:28:02.427829 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/117011cd-1ad8-4aff-b5d4-49bce3381f02-kube-api-access-x6tfd" (OuterVolumeSpecName: "kube-api-access-x6tfd") pod "117011cd-1ad8-4aff-b5d4-49bce3381f02" (UID: "117011cd-1ad8-4aff-b5d4-49bce3381f02"). InnerVolumeSpecName "kube-api-access-x6tfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:02 crc kubenswrapper[4835]: I0216 15:28:02.520692 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6tfd\" (UniqueName: \"kubernetes.io/projected/117011cd-1ad8-4aff-b5d4-49bce3381f02-kube-api-access-x6tfd\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.090322 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"117011cd-1ad8-4aff-b5d4-49bce3381f02","Type":"ContainerDied","Data":"491b805512d2083b15d40277355653fe73b4602a770df00607d66ed1daff2a50"} Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.090384 4835 scope.go:117] "RemoveContainer" containerID="0bdae44239bea6763e5e8c6d3b9acb4c41c127b25e622e89284f7da3374d5f42" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.090450 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.143293 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.167255 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.167311 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:28:03 crc kubenswrapper[4835]: E0216 15:28:03.167671 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="117011cd-1ad8-4aff-b5d4-49bce3381f02" containerName="kube-state-metrics" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.167683 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="117011cd-1ad8-4aff-b5d4-49bce3381f02" containerName="kube-state-metrics" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.167879 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="117011cd-1ad8-4aff-b5d4-49bce3381f02" containerName="kube-state-metrics" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.168649 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.194140 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.194460 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.216133 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.340836 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.341176 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.341369 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.341392 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbnl9\" (UniqueName: 
\"kubernetes.io/projected/f9fce61c-9bbc-46df-9441-890185c4c526-kube-api-access-lbnl9\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.390192 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="117011cd-1ad8-4aff-b5d4-49bce3381f02" path="/var/lib/kubelet/pods/117011cd-1ad8-4aff-b5d4-49bce3381f02/volumes" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.442846 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.442883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbnl9\" (UniqueName: \"kubernetes.io/projected/f9fce61c-9bbc-46df-9441-890185c4c526-kube-api-access-lbnl9\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.442934 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.442977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " 
pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.448656 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.448999 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.449120 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fce61c-9bbc-46df-9441-890185c4c526-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.463008 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbnl9\" (UniqueName: \"kubernetes.io/projected/f9fce61c-9bbc-46df-9441-890185c4c526-kube-api-access-lbnl9\") pod \"kube-state-metrics-0\" (UID: \"f9fce61c-9bbc-46df-9441-890185c4c526\") " pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.508042 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.554202 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.554548 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="ceilometer-central-agent" containerID="cri-o://bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d" gracePeriod=30 Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.555374 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="proxy-httpd" containerID="cri-o://b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083" gracePeriod=30 Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.555450 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="sg-core" containerID="cri-o://38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf" gracePeriod=30 Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.555501 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="ceilometer-notification-agent" containerID="cri-o://785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc" gracePeriod=30 Feb 16 15:28:03 crc kubenswrapper[4835]: W0216 15:28:03.990484 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9fce61c_9bbc_46df_9441_890185c4c526.slice/crio-9a61e6c5459c8c94f0644980eaafc6b9c3c4604bc29f8b9bbb34f5921c8cdf8b WatchSource:0}: Error finding container 
9a61e6c5459c8c94f0644980eaafc6b9c3c4604bc29f8b9bbb34f5921c8cdf8b: Status 404 returned error can't find the container with id 9a61e6c5459c8c94f0644980eaafc6b9c3c4604bc29f8b9bbb34f5921c8cdf8b Feb 16 15:28:03 crc kubenswrapper[4835]: I0216 15:28:03.993319 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:28:04 crc kubenswrapper[4835]: I0216 15:28:04.101885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9fce61c-9bbc-46df-9441-890185c4c526","Type":"ContainerStarted","Data":"9a61e6c5459c8c94f0644980eaafc6b9c3c4604bc29f8b9bbb34f5921c8cdf8b"} Feb 16 15:28:04 crc kubenswrapper[4835]: I0216 15:28:04.106123 4835 generic.go:334] "Generic (PLEG): container finished" podID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerID="b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083" exitCode=0 Feb 16 15:28:04 crc kubenswrapper[4835]: I0216 15:28:04.106165 4835 generic.go:334] "Generic (PLEG): container finished" podID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerID="38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf" exitCode=2 Feb 16 15:28:04 crc kubenswrapper[4835]: I0216 15:28:04.106182 4835 generic.go:334] "Generic (PLEG): container finished" podID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerID="bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d" exitCode=0 Feb 16 15:28:04 crc kubenswrapper[4835]: I0216 15:28:04.106378 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerDied","Data":"b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083"} Feb 16 15:28:04 crc kubenswrapper[4835]: I0216 15:28:04.106429 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerDied","Data":"38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf"} Feb 16 15:28:04 crc kubenswrapper[4835]: I0216 15:28:04.106445 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerDied","Data":"bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d"} Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.127225 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9fce61c-9bbc-46df-9441-890185c4c526","Type":"ContainerStarted","Data":"29da6309cad24dfc1e02a07667c870cbab16f87a12a14bdc5ed3863304508215"} Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.127808 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.153518 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.732517685 podStartE2EDuration="2.153500646s" podCreationTimestamp="2026-02-16 15:28:03 +0000 UTC" firstStartedPulling="2026-02-16 15:28:03.993783641 +0000 UTC m=+1233.285776596" lastFinishedPulling="2026-02-16 15:28:04.414766662 +0000 UTC m=+1233.706759557" observedRunningTime="2026-02-16 15:28:05.146893763 +0000 UTC m=+1234.438886668" watchObservedRunningTime="2026-02-16 15:28:05.153500646 +0000 UTC m=+1234.445493541" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.370918 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.421519 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.799200 4835 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.805977 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.806031 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.997442 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-sg-core-conf-yaml\") pod \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.997869 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-run-httpd\") pod \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.998184 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-log-httpd\") pod \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.998314 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phgw5\" (UniqueName: \"kubernetes.io/projected/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-kube-api-access-phgw5\") pod \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.998210 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5858ccbc-3ac8-49ee-88b1-d0c59b89288b" (UID: "5858ccbc-3ac8-49ee-88b1-d0c59b89288b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.998402 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5858ccbc-3ac8-49ee-88b1-d0c59b89288b" (UID: "5858ccbc-3ac8-49ee-88b1-d0c59b89288b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.998430 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-scripts\") pod \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.998555 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-combined-ca-bundle\") pod \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.998589 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-config-data\") pod \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\" (UID: \"5858ccbc-3ac8-49ee-88b1-d0c59b89288b\") " Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.998985 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-log-httpd\") on node \"crc\" 
DevicePath \"\"" Feb 16 15:28:05 crc kubenswrapper[4835]: I0216 15:28:05.999000 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.003675 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-scripts" (OuterVolumeSpecName: "scripts") pod "5858ccbc-3ac8-49ee-88b1-d0c59b89288b" (UID: "5858ccbc-3ac8-49ee-88b1-d0c59b89288b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.005423 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-kube-api-access-phgw5" (OuterVolumeSpecName: "kube-api-access-phgw5") pod "5858ccbc-3ac8-49ee-88b1-d0c59b89288b" (UID: "5858ccbc-3ac8-49ee-88b1-d0c59b89288b"). InnerVolumeSpecName "kube-api-access-phgw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.050730 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5858ccbc-3ac8-49ee-88b1-d0c59b89288b" (UID: "5858ccbc-3ac8-49ee-88b1-d0c59b89288b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.085459 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5858ccbc-3ac8-49ee-88b1-d0c59b89288b" (UID: "5858ccbc-3ac8-49ee-88b1-d0c59b89288b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.101939 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.101978 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phgw5\" (UniqueName: \"kubernetes.io/projected/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-kube-api-access-phgw5\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.101993 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.102006 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.113030 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-config-data" (OuterVolumeSpecName: "config-data") pod "5858ccbc-3ac8-49ee-88b1-d0c59b89288b" (UID: "5858ccbc-3ac8-49ee-88b1-d0c59b89288b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.139496 4835 generic.go:334] "Generic (PLEG): container finished" podID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerID="785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc" exitCode=0 Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.139732 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerDied","Data":"785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc"} Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.139782 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5858ccbc-3ac8-49ee-88b1-d0c59b89288b","Type":"ContainerDied","Data":"5b72842245a5c3f41979fda19469e044bf5634c46b0899eeb9ade1ab898998dd"} Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.139799 4835 scope.go:117] "RemoveContainer" containerID="b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.139954 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.166340 4835 scope.go:117] "RemoveContainer" containerID="38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.184822 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.196615 4835 scope.go:117] "RemoveContainer" containerID="785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.200052 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.206443 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5858ccbc-3ac8-49ee-88b1-d0c59b89288b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.215500 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.229325 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.230207 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="sg-core" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.232832 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="sg-core" Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.232978 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="ceilometer-notification-agent" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.233117 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="ceilometer-notification-agent" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.231326 4835 scope.go:117] "RemoveContainer" containerID="bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d" Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.233286 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="ceilometer-central-agent" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.233677 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="ceilometer-central-agent" Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.233799 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="proxy-httpd" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.233876 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="proxy-httpd" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.234861 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="sg-core" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.234987 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="ceilometer-central-agent" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.235083 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="proxy-httpd" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.235252 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" containerName="ceilometer-notification-agent" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.247467 4835 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.247689 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.252714 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.253163 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.255960 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.265428 4835 scope.go:117] "RemoveContainer" containerID="b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083" Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.265935 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083\": container with ID starting with b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083 not found: ID does not exist" containerID="b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.266037 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083"} err="failed to get container status \"b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083\": rpc error: code = NotFound desc = could not find container \"b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083\": container with ID starting with b327fc0893b5e0d6b20fcd988c7c33e5720e7d2a9c8b2a999fb647972a234083 not found: ID does not exist" Feb 16 15:28:06 crc 
kubenswrapper[4835]: I0216 15:28:06.266116 4835 scope.go:117] "RemoveContainer" containerID="38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf" Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.266360 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf\": container with ID starting with 38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf not found: ID does not exist" containerID="38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.266450 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf"} err="failed to get container status \"38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf\": rpc error: code = NotFound desc = could not find container \"38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf\": container with ID starting with 38678d41dc1f3e1a3ab124aae4191b7f9222c455c99356662b67847a389409cf not found: ID does not exist" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.266521 4835 scope.go:117] "RemoveContainer" containerID="785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc" Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.266789 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc\": container with ID starting with 785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc not found: ID does not exist" containerID="785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.266865 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc"} err="failed to get container status \"785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc\": rpc error: code = NotFound desc = could not find container \"785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc\": container with ID starting with 785419cb28c19ed24e4b94f1720c9ddeef4e1ed5a18667890a1be9422b07d7cc not found: ID does not exist" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.266948 4835 scope.go:117] "RemoveContainer" containerID="bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d" Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.267192 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d\": container with ID starting with bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d not found: ID does not exist" containerID="bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.267277 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d"} err="failed to get container status \"bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d\": rpc error: code = NotFound desc = could not find container \"bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d\": container with ID starting with bf8e6ad7e5cfdd2869ada80d40a449bc30c79aaf8684d432c12dda7efa35dc7d not found: ID does not exist" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.308457 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-log-httpd\") pod \"ceilometer-0\" (UID: 
\"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.308740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.308869 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96j2f\" (UniqueName: \"kubernetes.io/projected/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-kube-api-access-96j2f\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.309012 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-scripts\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.309121 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-config-data\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.309211 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 
15:28:06.309369 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-run-httpd\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.309521 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: E0216 15:28:06.380335 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.411216 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-log-httpd\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.411328 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.411367 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96j2f\" (UniqueName: 
\"kubernetes.io/projected/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-kube-api-access-96j2f\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.411405 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-scripts\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.411455 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-config-data\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.411472 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.411501 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-run-httpd\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.411550 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 
15:28:06.412218 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-log-httpd\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.413400 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-run-httpd\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.415861 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-scripts\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.415864 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.416785 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-config-data\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.417427 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " 
pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.418285 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.429295 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96j2f\" (UniqueName: \"kubernetes.io/projected/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-kube-api-access-96j2f\") pod \"ceilometer-0\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.582892 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.887875 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:28:06 crc kubenswrapper[4835]: I0216 15:28:06.888137 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:28:07 crc kubenswrapper[4835]: I0216 15:28:07.124757 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:07 crc kubenswrapper[4835]: I0216 15:28:07.152737 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerStarted","Data":"60f9ba3795b4fe6d1657ca0754a163cfb3464b933394401ff9560ae2339706f2"} Feb 16 15:28:07 crc kubenswrapper[4835]: I0216 15:28:07.390065 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5858ccbc-3ac8-49ee-88b1-d0c59b89288b" path="/var/lib/kubelet/pods/5858ccbc-3ac8-49ee-88b1-d0c59b89288b/volumes" Feb 16 15:28:08 crc kubenswrapper[4835]: I0216 15:28:08.169541 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerStarted","Data":"d92281301f473bb0597d63bc445f7380bc1ac40c732b4859c220c8d9d67f7768"} Feb 16 15:28:09 crc kubenswrapper[4835]: I0216 15:28:09.188807 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerStarted","Data":"b6179ddf173aad744099689bfe8ec9bce7c9d4c004277f48210c36d81b4147eb"} Feb 16 15:28:10 crc kubenswrapper[4835]: I0216 15:28:10.200365 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerStarted","Data":"b7e06ee106f6dbe32e65f8b95fb36ef7296d6648d7e2c0213525b8849bd67a1a"} Feb 16 15:28:10 crc kubenswrapper[4835]: I0216 15:28:10.574728 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:28:10 crc kubenswrapper[4835]: I0216 15:28:10.575134 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:28:10 crc kubenswrapper[4835]: I0216 15:28:10.583954 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:28:10 crc kubenswrapper[4835]: I0216 15:28:10.585645 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:28:11 crc 
kubenswrapper[4835]: I0216 15:28:11.215982 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerStarted","Data":"fcf4b71bd91c33731dd117a25a6d0c18bb2f268475b9f4111d985d3688bda9c0"} Feb 16 15:28:11 crc kubenswrapper[4835]: I0216 15:28:11.241399 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9537604229999999 podStartE2EDuration="5.24137807s" podCreationTimestamp="2026-02-16 15:28:06 +0000 UTC" firstStartedPulling="2026-02-16 15:28:07.135111409 +0000 UTC m=+1236.427104294" lastFinishedPulling="2026-02-16 15:28:10.422729046 +0000 UTC m=+1239.714721941" observedRunningTime="2026-02-16 15:28:11.233384942 +0000 UTC m=+1240.525377847" watchObservedRunningTime="2026-02-16 15:28:11.24137807 +0000 UTC m=+1240.533370965" Feb 16 15:28:12 crc kubenswrapper[4835]: I0216 15:28:12.229260 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:28:13 crc kubenswrapper[4835]: I0216 15:28:13.520797 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.186969 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.254921 4835 generic.go:334] "Generic (PLEG): container finished" podID="3ad6fbc1-2208-46a2-8fe7-87aebc51475d" containerID="4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935" exitCode=137 Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.254993 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ad6fbc1-2208-46a2-8fe7-87aebc51475d","Type":"ContainerDied","Data":"4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935"} Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.255026 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3ad6fbc1-2208-46a2-8fe7-87aebc51475d","Type":"ContainerDied","Data":"5c5f049aa7b1bef51bea37470d16d4ef0d07df8c1a85a9271631eb32501dcf3c"} Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.255028 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.255069 4835 scope.go:117] "RemoveContainer" containerID="4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.275967 4835 scope.go:117] "RemoveContainer" containerID="4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935" Feb 16 15:28:14 crc kubenswrapper[4835]: E0216 15:28:14.276564 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935\": container with ID starting with 4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935 not found: ID does not exist" containerID="4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.276602 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935"} err="failed to get container status \"4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935\": rpc error: code = NotFound desc = could not find container \"4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935\": container with ID starting with 4c25fa407b6e2911ba45b98d6cbe2c4c98e2f4685826baee6bfa8014de04e935 not found: ID does not exist" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.291934 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-combined-ca-bundle\") pod \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.291987 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-l9kxt\" (UniqueName: \"kubernetes.io/projected/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-kube-api-access-l9kxt\") pod \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.292132 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-config-data\") pod \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\" (UID: \"3ad6fbc1-2208-46a2-8fe7-87aebc51475d\") " Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.300357 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-kube-api-access-l9kxt" (OuterVolumeSpecName: "kube-api-access-l9kxt") pod "3ad6fbc1-2208-46a2-8fe7-87aebc51475d" (UID: "3ad6fbc1-2208-46a2-8fe7-87aebc51475d"). InnerVolumeSpecName "kube-api-access-l9kxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.326510 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-config-data" (OuterVolumeSpecName: "config-data") pod "3ad6fbc1-2208-46a2-8fe7-87aebc51475d" (UID: "3ad6fbc1-2208-46a2-8fe7-87aebc51475d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.332717 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ad6fbc1-2208-46a2-8fe7-87aebc51475d" (UID: "3ad6fbc1-2208-46a2-8fe7-87aebc51475d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.394227 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.394275 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.394292 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9kxt\" (UniqueName: \"kubernetes.io/projected/3ad6fbc1-2208-46a2-8fe7-87aebc51475d-kube-api-access-l9kxt\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.646027 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.668297 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.685097 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:28:14 crc kubenswrapper[4835]: E0216 15:28:14.685515 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad6fbc1-2208-46a2-8fe7-87aebc51475d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.685572 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad6fbc1-2208-46a2-8fe7-87aebc51475d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.685810 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad6fbc1-2208-46a2-8fe7-87aebc51475d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 
15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.686509 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.692707 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.693334 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.693502 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.697574 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.803290 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.803383 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.803483 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt5p5\" (UniqueName: \"kubernetes.io/projected/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-kube-api-access-dt5p5\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.803621 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.803698 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.905420 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.905643 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.905811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.906425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.906615 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5p5\" (UniqueName: \"kubernetes.io/projected/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-kube-api-access-dt5p5\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.913402 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.915542 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.922480 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5p5\" (UniqueName: \"kubernetes.io/projected/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-kube-api-access-dt5p5\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 
crc kubenswrapper[4835]: I0216 15:28:14.929303 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:14 crc kubenswrapper[4835]: I0216 15:28:14.929456 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90c22cd-b62e-4d0e-bf45-ba03b2241ba7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:15 crc kubenswrapper[4835]: I0216 15:28:15.017055 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:15 crc kubenswrapper[4835]: I0216 15:28:15.411982 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad6fbc1-2208-46a2-8fe7-87aebc51475d" path="/var/lib/kubelet/pods/3ad6fbc1-2208-46a2-8fe7-87aebc51475d/volumes" Feb 16 15:28:15 crc kubenswrapper[4835]: I0216 15:28:15.515933 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:28:15 crc kubenswrapper[4835]: I0216 15:28:15.810387 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 15:28:15 crc kubenswrapper[4835]: I0216 15:28:15.811651 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 15:28:15 crc kubenswrapper[4835]: I0216 15:28:15.813403 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 15:28:15 crc kubenswrapper[4835]: I0216 15:28:15.815250 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 
15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.277017 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7","Type":"ContainerStarted","Data":"d57d1ea001bf626fad209e1298102ceaf5b0953d9c6c0c85287443693fc03719"} Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.277084 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c90c22cd-b62e-4d0e-bf45-ba03b2241ba7","Type":"ContainerStarted","Data":"3aeff1270aa7dfb8455e2ed15b820ddbf70169fb0ea1d4552164a109c3c8b2f3"} Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.277232 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.292202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.305243 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.305224983 podStartE2EDuration="2.305224983s" podCreationTimestamp="2026-02-16 15:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:28:16.297751459 +0000 UTC m=+1245.589744354" watchObservedRunningTime="2026-02-16 15:28:16.305224983 +0000 UTC m=+1245.597217878" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.472931 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-c8sbw"] Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.475237 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.484340 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-c8sbw"] Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.647303 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.647491 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.647590 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6pk8\" (UniqueName: \"kubernetes.io/projected/04860880-2e34-4a63-b26b-e1bb6163560f-kube-api-access-s6pk8\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.647708 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.647759 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-config\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.647927 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.750254 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.750387 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.750422 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6pk8\" (UniqueName: \"kubernetes.io/projected/04860880-2e34-4a63-b26b-e1bb6163560f-kube-api-access-s6pk8\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.750449 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.750473 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-config\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.750554 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.751416 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.751978 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.752518 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.753376 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.753911 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04860880-2e34-4a63-b26b-e1bb6163560f-config\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.773488 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6pk8\" (UniqueName: \"kubernetes.io/projected/04860880-2e34-4a63-b26b-e1bb6163560f-kube-api-access-s6pk8\") pod \"dnsmasq-dns-89c5cd4d5-c8sbw\" (UID: \"04860880-2e34-4a63-b26b-e1bb6163560f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:16 crc kubenswrapper[4835]: I0216 15:28:16.820329 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:17 crc kubenswrapper[4835]: I0216 15:28:17.311375 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-c8sbw"] Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.300157 4835 generic.go:334] "Generic (PLEG): container finished" podID="04860880-2e34-4a63-b26b-e1bb6163560f" containerID="fc33855804795d2eff551c8398bce913d1e429bf1cdf4c05f9bf5209bb501115" exitCode=0 Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.300442 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" event={"ID":"04860880-2e34-4a63-b26b-e1bb6163560f","Type":"ContainerDied","Data":"fc33855804795d2eff551c8398bce913d1e429bf1cdf4c05f9bf5209bb501115"} Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.300627 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" event={"ID":"04860880-2e34-4a63-b26b-e1bb6163560f","Type":"ContainerStarted","Data":"428c9d1cc12859093990f7d4885da2e5ad76cff3200440157e4f43f01a774883"} Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.882743 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.893709 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.894218 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="ceilometer-central-agent" containerID="cri-o://d92281301f473bb0597d63bc445f7380bc1ac40c732b4859c220c8d9d67f7768" gracePeriod=30 Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.894342 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" 
containerName="ceilometer-notification-agent" containerID="cri-o://b6179ddf173aad744099689bfe8ec9bce7c9d4c004277f48210c36d81b4147eb" gracePeriod=30 Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.894357 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="proxy-httpd" containerID="cri-o://fcf4b71bd91c33731dd117a25a6d0c18bb2f268475b9f4111d985d3688bda9c0" gracePeriod=30 Feb 16 15:28:18 crc kubenswrapper[4835]: I0216 15:28:18.894340 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="sg-core" containerID="cri-o://b7e06ee106f6dbe32e65f8b95fb36ef7296d6648d7e2c0213525b8849bd67a1a" gracePeriod=30 Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.318744 4835 generic.go:334] "Generic (PLEG): container finished" podID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerID="fcf4b71bd91c33731dd117a25a6d0c18bb2f268475b9f4111d985d3688bda9c0" exitCode=0 Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.319723 4835 generic.go:334] "Generic (PLEG): container finished" podID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerID="b7e06ee106f6dbe32e65f8b95fb36ef7296d6648d7e2c0213525b8849bd67a1a" exitCode=2 Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.318832 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerDied","Data":"fcf4b71bd91c33731dd117a25a6d0c18bb2f268475b9f4111d985d3688bda9c0"} Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.319960 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerDied","Data":"b7e06ee106f6dbe32e65f8b95fb36ef7296d6648d7e2c0213525b8849bd67a1a"} Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.323090 
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" event={"ID":"04860880-2e34-4a63-b26b-e1bb6163560f","Type":"ContainerStarted","Data":"41b74b21da112892007485abcce2eec0d8b6d1bab98e03227ef80f44f6817e51"} Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.323153 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.323652 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-log" containerID="cri-o://049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47" gracePeriod=30 Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.323779 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-api" containerID="cri-o://2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7" gracePeriod=30 Feb 16 15:28:19 crc kubenswrapper[4835]: I0216 15:28:19.350362 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" podStartSLOduration=3.350344406 podStartE2EDuration="3.350344406s" podCreationTimestamp="2026-02-16 15:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:28:19.346705132 +0000 UTC m=+1248.638698027" watchObservedRunningTime="2026-02-16 15:28:19.350344406 +0000 UTC m=+1248.642337301" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.018014 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.340291 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerID="b6179ddf173aad744099689bfe8ec9bce7c9d4c004277f48210c36d81b4147eb" exitCode=0 Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.340326 4835 generic.go:334] "Generic (PLEG): container finished" podID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerID="d92281301f473bb0597d63bc445f7380bc1ac40c732b4859c220c8d9d67f7768" exitCode=0 Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.340361 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerDied","Data":"b6179ddf173aad744099689bfe8ec9bce7c9d4c004277f48210c36d81b4147eb"} Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.340407 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerDied","Data":"d92281301f473bb0597d63bc445f7380bc1ac40c732b4859c220c8d9d67f7768"} Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.342769 4835 generic.go:334] "Generic (PLEG): container finished" podID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerID="049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47" exitCode=143 Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.342797 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9177a8c2-3c0c-4194-8abe-5e7796bf7e64","Type":"ContainerDied","Data":"049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47"} Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.655117 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.840215 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-run-httpd\") pod \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.840302 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-combined-ca-bundle\") pod \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.840348 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96j2f\" (UniqueName: \"kubernetes.io/projected/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-kube-api-access-96j2f\") pod \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.840611 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-scripts\") pod \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.840638 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-ceilometer-tls-certs\") pod \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.840727 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-log-httpd\") pod \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.840753 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-sg-core-conf-yaml\") pod \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.840788 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-config-data\") pod \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\" (UID: \"6fb7f3d6-99cc-41d2-aad4-29c7daa24486\") " Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.841156 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6fb7f3d6-99cc-41d2-aad4-29c7daa24486" (UID: "6fb7f3d6-99cc-41d2-aad4-29c7daa24486"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.841613 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6fb7f3d6-99cc-41d2-aad4-29c7daa24486" (UID: "6fb7f3d6-99cc-41d2-aad4-29c7daa24486"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.849297 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-kube-api-access-96j2f" (OuterVolumeSpecName: "kube-api-access-96j2f") pod "6fb7f3d6-99cc-41d2-aad4-29c7daa24486" (UID: "6fb7f3d6-99cc-41d2-aad4-29c7daa24486"). InnerVolumeSpecName "kube-api-access-96j2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.850098 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-scripts" (OuterVolumeSpecName: "scripts") pod "6fb7f3d6-99cc-41d2-aad4-29c7daa24486" (UID: "6fb7f3d6-99cc-41d2-aad4-29c7daa24486"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.893438 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6fb7f3d6-99cc-41d2-aad4-29c7daa24486" (UID: "6fb7f3d6-99cc-41d2-aad4-29c7daa24486"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.901011 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6fb7f3d6-99cc-41d2-aad4-29c7daa24486" (UID: "6fb7f3d6-99cc-41d2-aad4-29c7daa24486"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.937894 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fb7f3d6-99cc-41d2-aad4-29c7daa24486" (UID: "6fb7f3d6-99cc-41d2-aad4-29c7daa24486"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.943373 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.943415 4835 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.943429 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.943442 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.943450 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.943459 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.943468 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96j2f\" (UniqueName: \"kubernetes.io/projected/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-kube-api-access-96j2f\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:20 crc kubenswrapper[4835]: I0216 15:28:20.974388 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-config-data" (OuterVolumeSpecName: "config-data") pod "6fb7f3d6-99cc-41d2-aad4-29c7daa24486" (UID: "6fb7f3d6-99cc-41d2-aad4-29c7daa24486"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.046011 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb7f3d6-99cc-41d2-aad4-29c7daa24486-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.370800 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6fb7f3d6-99cc-41d2-aad4-29c7daa24486","Type":"ContainerDied","Data":"60f9ba3795b4fe6d1657ca0754a163cfb3464b933394401ff9560ae2339706f2"} Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.371167 4835 scope.go:117] "RemoveContainer" containerID="fcf4b71bd91c33731dd117a25a6d0c18bb2f268475b9f4111d985d3688bda9c0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.370898 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.405563 4835 scope.go:117] "RemoveContainer" containerID="b7e06ee106f6dbe32e65f8b95fb36ef7296d6648d7e2c0213525b8849bd67a1a" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.447500 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.452655 4835 scope.go:117] "RemoveContainer" containerID="b6179ddf173aad744099689bfe8ec9bce7c9d4c004277f48210c36d81b4147eb" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.471034 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.488232 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:21 crc kubenswrapper[4835]: E0216 15:28:21.489290 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="proxy-httpd" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.489320 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="proxy-httpd" Feb 16 15:28:21 crc kubenswrapper[4835]: E0216 15:28:21.489352 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="ceilometer-notification-agent" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.489362 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="ceilometer-notification-agent" Feb 16 15:28:21 crc kubenswrapper[4835]: E0216 15:28:21.489377 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="sg-core" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.489384 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="sg-core" Feb 16 15:28:21 crc kubenswrapper[4835]: E0216 15:28:21.489410 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="ceilometer-central-agent" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.489421 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="ceilometer-central-agent" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.489720 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="proxy-httpd" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.489753 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="sg-core" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.489783 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="ceilometer-notification-agent" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.489824 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" containerName="ceilometer-central-agent" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.493046 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.495525 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.495666 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.495904 4835 scope.go:117] "RemoveContainer" containerID="d92281301f473bb0597d63bc445f7380bc1ac40c732b4859c220c8d9d67f7768" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.495995 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.502275 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:21 crc kubenswrapper[4835]: E0216 15:28:21.539242 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:28:21 crc kubenswrapper[4835]: E0216 15:28:21.539300 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:28:21 crc kubenswrapper[4835]: E0216 15:28:21.539422 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:28:21 crc kubenswrapper[4835]: E0216 15:28:21.540732 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.666048 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.666139 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51fc856-a532-470f-aa5e-349bc749062b-run-httpd\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.666199 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg5q2\" (UniqueName: \"kubernetes.io/projected/b51fc856-a532-470f-aa5e-349bc749062b-kube-api-access-vg5q2\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.666233 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-scripts\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.666257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " 
pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.666278 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-config-data\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.666299 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.666317 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51fc856-a532-470f-aa5e-349bc749062b-log-httpd\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.767782 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg5q2\" (UniqueName: \"kubernetes.io/projected/b51fc856-a532-470f-aa5e-349bc749062b-kube-api-access-vg5q2\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.767866 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-scripts\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.767897 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.767923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-config-data\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.767946 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.767962 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51fc856-a532-470f-aa5e-349bc749062b-log-httpd\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.768080 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.768139 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51fc856-a532-470f-aa5e-349bc749062b-run-httpd\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 
15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.768607 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51fc856-a532-470f-aa5e-349bc749062b-run-httpd\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.768632 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b51fc856-a532-470f-aa5e-349bc749062b-log-httpd\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.775888 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-scripts\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.776012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-config-data\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.776811 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.786821 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.788705 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b51fc856-a532-470f-aa5e-349bc749062b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.791105 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg5q2\" (UniqueName: \"kubernetes.io/projected/b51fc856-a532-470f-aa5e-349bc749062b-kube-api-access-vg5q2\") pod \"ceilometer-0\" (UID: \"b51fc856-a532-470f-aa5e-349bc749062b\") " pod="openstack/ceilometer-0" Feb 16 15:28:21 crc kubenswrapper[4835]: I0216 15:28:21.815842 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:28:22 crc kubenswrapper[4835]: I0216 15:28:22.363506 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:28:22 crc kubenswrapper[4835]: I0216 15:28:22.401931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51fc856-a532-470f-aa5e-349bc749062b","Type":"ContainerStarted","Data":"4f773b306dd8083457da3ddc7d978135763e3930888c2a7b10cc56447e1607b1"} Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.089844 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.201212 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-combined-ca-bundle\") pod \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.201267 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-config-data\") pod \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.201297 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cff\" (UniqueName: \"kubernetes.io/projected/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-kube-api-access-d7cff\") pod \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.201347 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-logs\") pod \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\" (UID: \"9177a8c2-3c0c-4194-8abe-5e7796bf7e64\") " Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.202004 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-logs" (OuterVolumeSpecName: "logs") pod "9177a8c2-3c0c-4194-8abe-5e7796bf7e64" (UID: "9177a8c2-3c0c-4194-8abe-5e7796bf7e64"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.202281 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.211715 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-kube-api-access-d7cff" (OuterVolumeSpecName: "kube-api-access-d7cff") pod "9177a8c2-3c0c-4194-8abe-5e7796bf7e64" (UID: "9177a8c2-3c0c-4194-8abe-5e7796bf7e64"). InnerVolumeSpecName "kube-api-access-d7cff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.237901 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-config-data" (OuterVolumeSpecName: "config-data") pod "9177a8c2-3c0c-4194-8abe-5e7796bf7e64" (UID: "9177a8c2-3c0c-4194-8abe-5e7796bf7e64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.250175 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9177a8c2-3c0c-4194-8abe-5e7796bf7e64" (UID: "9177a8c2-3c0c-4194-8abe-5e7796bf7e64"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.304683 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.304716 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.304725 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7cff\" (UniqueName: \"kubernetes.io/projected/9177a8c2-3c0c-4194-8abe-5e7796bf7e64-kube-api-access-d7cff\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.390452 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb7f3d6-99cc-41d2-aad4-29c7daa24486" path="/var/lib/kubelet/pods/6fb7f3d6-99cc-41d2-aad4-29c7daa24486/volumes" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.413112 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51fc856-a532-470f-aa5e-349bc749062b","Type":"ContainerStarted","Data":"3a5fdf96a83eaaae79df60b7c7cdace4935a31ef0c796c8ce1e537b17ef0a5cd"} Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.415496 4835 generic.go:334] "Generic (PLEG): container finished" podID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerID="2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7" exitCode=0 Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.415625 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9177a8c2-3c0c-4194-8abe-5e7796bf7e64","Type":"ContainerDied","Data":"2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7"} Feb 16 15:28:23 crc 
kubenswrapper[4835]: I0216 15:28:23.415679 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.415689 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9177a8c2-3c0c-4194-8abe-5e7796bf7e64","Type":"ContainerDied","Data":"ebbad9c532ef1536692a01b4d41634d79db881ec5003f7d0a9ebd51b4bd43f54"} Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.415724 4835 scope.go:117] "RemoveContainer" containerID="2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.442312 4835 scope.go:117] "RemoveContainer" containerID="049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.446153 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.455553 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.471125 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.471589 4835 scope.go:117] "RemoveContainer" containerID="2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7" Feb 16 15:28:23 crc kubenswrapper[4835]: E0216 15:28:23.471761 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-log" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.471778 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-log" Feb 16 15:28:23 crc kubenswrapper[4835]: E0216 15:28:23.471803 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" 
containerName="nova-api-api" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.471810 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-api" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.472266 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-api" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.472288 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" containerName="nova-api-log" Feb 16 15:28:23 crc kubenswrapper[4835]: E0216 15:28:23.473260 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7\": container with ID starting with 2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7 not found: ID does not exist" containerID="2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.473302 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7"} err="failed to get container status \"2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7\": rpc error: code = NotFound desc = could not find container \"2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7\": container with ID starting with 2a72be23b5f358e45a960403cee9bf4150678de26b59a664d676e2b69e5ad9f7 not found: ID does not exist" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.473328 4835 scope.go:117] "RemoveContainer" containerID="049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.473336 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: E0216 15:28:23.474008 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47\": container with ID starting with 049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47 not found: ID does not exist" containerID="049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.474032 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47"} err="failed to get container status \"049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47\": rpc error: code = NotFound desc = could not find container \"049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47\": container with ID starting with 049fd24aa8c50e83e0a60b20790a8f7725481d8caf00413432237ce38cfdba47 not found: ID does not exist" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.475706 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.478813 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.479681 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.490898 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.610310 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.610391 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-config-data\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.610436 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.610477 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bzwp\" (UniqueName: \"kubernetes.io/projected/f0238764-cfcf-4828-8c5c-3f1e6d31a222-kube-api-access-6bzwp\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.610518 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0238764-cfcf-4828-8c5c-3f1e6d31a222-logs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.610559 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") 
" pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.712707 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.712773 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzwp\" (UniqueName: \"kubernetes.io/projected/f0238764-cfcf-4828-8c5c-3f1e6d31a222-kube-api-access-6bzwp\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.712816 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0238764-cfcf-4828-8c5c-3f1e6d31a222-logs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.712836 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.712948 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.712980 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-config-data\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.716204 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0238764-cfcf-4828-8c5c-3f1e6d31a222-logs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.717412 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-config-data\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.720610 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.720921 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.726189 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.744482 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6bzwp\" (UniqueName: \"kubernetes.io/projected/f0238764-cfcf-4828-8c5c-3f1e6d31a222-kube-api-access-6bzwp\") pod \"nova-api-0\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " pod="openstack/nova-api-0" Feb 16 15:28:23 crc kubenswrapper[4835]: I0216 15:28:23.792818 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:28:24 crc kubenswrapper[4835]: I0216 15:28:24.426322 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:24 crc kubenswrapper[4835]: I0216 15:28:24.489784 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0238764-cfcf-4828-8c5c-3f1e6d31a222","Type":"ContainerStarted","Data":"55c74c7dc62d8e2941cecc3f5c9281b0397cd3357b29fe6a277b5b082d545347"} Feb 16 15:28:24 crc kubenswrapper[4835]: I0216 15:28:24.491220 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51fc856-a532-470f-aa5e-349bc749062b","Type":"ContainerStarted","Data":"f38f449cd9b9f4fe1b0a8c889560307901a0cbf36f7f73e8a8fbb99e32c5821d"} Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.017262 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.038401 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.396669 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9177a8c2-3c0c-4194-8abe-5e7796bf7e64" path="/var/lib/kubelet/pods/9177a8c2-3c0c-4194-8abe-5e7796bf7e64/volumes" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.500460 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f0238764-cfcf-4828-8c5c-3f1e6d31a222","Type":"ContainerStarted","Data":"88618b7fef8d799957afd0ebf25ad781d7bf8da42a591eb1fd282731c176417d"} Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.500502 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0238764-cfcf-4828-8c5c-3f1e6d31a222","Type":"ContainerStarted","Data":"95646521532be8da4580621f1a1eceb6a8eb04bd44718413fcd34c4cb0e91fa2"} Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.504306 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51fc856-a532-470f-aa5e-349bc749062b","Type":"ContainerStarted","Data":"6716cd5b1e68a919d424ef67e83d9225f013bfcf71973944aab7067a793a42d6"} Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.526758 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.535403 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.535387921 podStartE2EDuration="2.535387921s" podCreationTimestamp="2026-02-16 15:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:28:25.527016843 +0000 UTC m=+1254.819009738" watchObservedRunningTime="2026-02-16 15:28:25.535387921 +0000 UTC m=+1254.827380816" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.699279 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bc6t7"] Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.701232 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.705834 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.706069 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.730064 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bc6t7"] Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.768683 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj789\" (UniqueName: \"kubernetes.io/projected/ad3513d5-d2c9-48aa-8264-a7728591bf53-kube-api-access-zj789\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.768726 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-scripts\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.768768 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.768867 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-config-data\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.870886 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj789\" (UniqueName: \"kubernetes.io/projected/ad3513d5-d2c9-48aa-8264-a7728591bf53-kube-api-access-zj789\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.870929 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-scripts\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.870975 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.871074 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-config-data\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.885172 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-scripts\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.885419 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.885600 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-config-data\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:25 crc kubenswrapper[4835]: I0216 15:28:25.887730 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj789\" (UniqueName: \"kubernetes.io/projected/ad3513d5-d2c9-48aa-8264-a7728591bf53-kube-api-access-zj789\") pod \"nova-cell1-cell-mapping-bc6t7\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:26 crc kubenswrapper[4835]: I0216 15:28:26.052279 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:26 crc kubenswrapper[4835]: I0216 15:28:26.547105 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bc6t7"] Feb 16 15:28:26 crc kubenswrapper[4835]: W0216 15:28:26.550330 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad3513d5_d2c9_48aa_8264_a7728591bf53.slice/crio-ad062b819be80e4f130d9c204977fd3290e6cd2ea30d97692b4aff29d4012833 WatchSource:0}: Error finding container ad062b819be80e4f130d9c204977fd3290e6cd2ea30d97692b4aff29d4012833: Status 404 returned error can't find the container with id ad062b819be80e4f130d9c204977fd3290e6cd2ea30d97692b4aff29d4012833 Feb 16 15:28:26 crc kubenswrapper[4835]: I0216 15:28:26.827648 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-c8sbw" Feb 16 15:28:26 crc kubenswrapper[4835]: I0216 15:28:26.918922 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-slzml"] Feb 16 15:28:26 crc kubenswrapper[4835]: I0216 15:28:26.919127 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-slzml" podUID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" containerName="dnsmasq-dns" containerID="cri-o://f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f" gracePeriod=10 Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.485672 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.543279 4835 generic.go:334] "Generic (PLEG): container finished" podID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" containerID="f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f" exitCode=0 Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.543370 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-slzml" event={"ID":"1be0de54-02e9-4cfa-9fef-b5e5c00bd572","Type":"ContainerDied","Data":"f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f"} Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.543402 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-slzml" event={"ID":"1be0de54-02e9-4cfa-9fef-b5e5c00bd572","Type":"ContainerDied","Data":"2c26d3a68ded3ec4beb1c88520f6b0616608128096c60667f4cd51015ca1e434"} Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.543426 4835 scope.go:117] "RemoveContainer" containerID="f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.543698 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-slzml" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.553201 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bc6t7" event={"ID":"ad3513d5-d2c9-48aa-8264-a7728591bf53","Type":"ContainerStarted","Data":"b7a4d0123bb745b3d3a98a494901c671a07757f07dc33822953be3ea70cc95ee"} Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.553237 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bc6t7" event={"ID":"ad3513d5-d2c9-48aa-8264-a7728591bf53","Type":"ContainerStarted","Data":"ad062b819be80e4f130d9c204977fd3290e6cd2ea30d97692b4aff29d4012833"} Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.604727 4835 scope.go:117] "RemoveContainer" containerID="ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.617665 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8csxp\" (UniqueName: \"kubernetes.io/projected/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-kube-api-access-8csxp\") pod \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.617719 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-config\") pod \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.617741 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-svc\") pod \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.617816 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-nb\") pod \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.617987 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-swift-storage-0\") pod \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.618073 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-sb\") pod \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\" (UID: \"1be0de54-02e9-4cfa-9fef-b5e5c00bd572\") " Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.627099 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-kube-api-access-8csxp" (OuterVolumeSpecName: "kube-api-access-8csxp") pod "1be0de54-02e9-4cfa-9fef-b5e5c00bd572" (UID: "1be0de54-02e9-4cfa-9fef-b5e5c00bd572"). InnerVolumeSpecName "kube-api-access-8csxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.656119 4835 scope.go:117] "RemoveContainer" containerID="f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f" Feb 16 15:28:27 crc kubenswrapper[4835]: E0216 15:28:27.663861 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f\": container with ID starting with f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f not found: ID does not exist" containerID="f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.663910 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f"} err="failed to get container status \"f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f\": rpc error: code = NotFound desc = could not find container \"f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f\": container with ID starting with f8af5327e4adae2fb872968e4194451156796e6686e87282eb5d68372b253c6f not found: ID does not exist" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.663938 4835 scope.go:117] "RemoveContainer" containerID="ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322" Feb 16 15:28:27 crc kubenswrapper[4835]: E0216 15:28:27.664236 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322\": container with ID starting with ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322 not found: ID does not exist" containerID="ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.664372 
4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322"} err="failed to get container status \"ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322\": rpc error: code = NotFound desc = could not find container \"ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322\": container with ID starting with ea9eb25ea57ec59487089697cd0a93c6a354062e1d49d61292dba5c1b1ae5322 not found: ID does not exist" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.670889 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1be0de54-02e9-4cfa-9fef-b5e5c00bd572" (UID: "1be0de54-02e9-4cfa-9fef-b5e5c00bd572"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.692614 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1be0de54-02e9-4cfa-9fef-b5e5c00bd572" (UID: "1be0de54-02e9-4cfa-9fef-b5e5c00bd572"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.697289 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1be0de54-02e9-4cfa-9fef-b5e5c00bd572" (UID: "1be0de54-02e9-4cfa-9fef-b5e5c00bd572"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.699416 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-config" (OuterVolumeSpecName: "config") pod "1be0de54-02e9-4cfa-9fef-b5e5c00bd572" (UID: "1be0de54-02e9-4cfa-9fef-b5e5c00bd572"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.718836 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1be0de54-02e9-4cfa-9fef-b5e5c00bd572" (UID: "1be0de54-02e9-4cfa-9fef-b5e5c00bd572"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.720236 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.720260 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.720270 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.720280 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8csxp\" (UniqueName: \"kubernetes.io/projected/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-kube-api-access-8csxp\") on node \"crc\" 
DevicePath \"\"" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.720291 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.720300 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1be0de54-02e9-4cfa-9fef-b5e5c00bd572-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.874712 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bc6t7" podStartSLOduration=2.874690968 podStartE2EDuration="2.874690968s" podCreationTimestamp="2026-02-16 15:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:28:27.574180034 +0000 UTC m=+1256.866172919" watchObservedRunningTime="2026-02-16 15:28:27.874690968 +0000 UTC m=+1257.166683863" Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.875742 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-slzml"] Feb 16 15:28:27 crc kubenswrapper[4835]: I0216 15:28:27.887907 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-slzml"] Feb 16 15:28:28 crc kubenswrapper[4835]: I0216 15:28:28.566485 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b51fc856-a532-470f-aa5e-349bc749062b","Type":"ContainerStarted","Data":"dff11f7478b5a346b877278532a8831b895fa25ac1b6f325186b5a6bc7034031"} Feb 16 15:28:28 crc kubenswrapper[4835]: I0216 15:28:28.566766 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:28:28 crc kubenswrapper[4835]: I0216 15:28:28.598931 4835 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7089077059999997 podStartE2EDuration="7.598905103s" podCreationTimestamp="2026-02-16 15:28:21 +0000 UTC" firstStartedPulling="2026-02-16 15:28:22.377447321 +0000 UTC m=+1251.669440256" lastFinishedPulling="2026-02-16 15:28:27.267444758 +0000 UTC m=+1256.559437653" observedRunningTime="2026-02-16 15:28:28.585246837 +0000 UTC m=+1257.877239752" watchObservedRunningTime="2026-02-16 15:28:28.598905103 +0000 UTC m=+1257.890898038" Feb 16 15:28:29 crc kubenswrapper[4835]: I0216 15:28:29.405915 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" path="/var/lib/kubelet/pods/1be0de54-02e9-4cfa-9fef-b5e5c00bd572/volumes" Feb 16 15:28:31 crc kubenswrapper[4835]: I0216 15:28:31.601716 4835 generic.go:334] "Generic (PLEG): container finished" podID="ad3513d5-d2c9-48aa-8264-a7728591bf53" containerID="b7a4d0123bb745b3d3a98a494901c671a07757f07dc33822953be3ea70cc95ee" exitCode=0 Feb 16 15:28:31 crc kubenswrapper[4835]: I0216 15:28:31.601949 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bc6t7" event={"ID":"ad3513d5-d2c9-48aa-8264-a7728591bf53","Type":"ContainerDied","Data":"b7a4d0123bb745b3d3a98a494901c671a07757f07dc33822953be3ea70cc95ee"} Feb 16 15:28:31 crc kubenswrapper[4835]: E0216 15:28:31.662240 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad3513d5_d2c9_48aa_8264_a7728591bf53.slice/crio-b7a4d0123bb745b3d3a98a494901c671a07757f07dc33822953be3ea70cc95ee.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.119842 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.273914 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj789\" (UniqueName: \"kubernetes.io/projected/ad3513d5-d2c9-48aa-8264-a7728591bf53-kube-api-access-zj789\") pod \"ad3513d5-d2c9-48aa-8264-a7728591bf53\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.274027 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-scripts\") pod \"ad3513d5-d2c9-48aa-8264-a7728591bf53\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.274051 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-config-data\") pod \"ad3513d5-d2c9-48aa-8264-a7728591bf53\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.274282 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-combined-ca-bundle\") pod \"ad3513d5-d2c9-48aa-8264-a7728591bf53\" (UID: \"ad3513d5-d2c9-48aa-8264-a7728591bf53\") " Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.282596 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3513d5-d2c9-48aa-8264-a7728591bf53-kube-api-access-zj789" (OuterVolumeSpecName: "kube-api-access-zj789") pod "ad3513d5-d2c9-48aa-8264-a7728591bf53" (UID: "ad3513d5-d2c9-48aa-8264-a7728591bf53"). InnerVolumeSpecName "kube-api-access-zj789". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.292700 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-scripts" (OuterVolumeSpecName: "scripts") pod "ad3513d5-d2c9-48aa-8264-a7728591bf53" (UID: "ad3513d5-d2c9-48aa-8264-a7728591bf53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.303944 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad3513d5-d2c9-48aa-8264-a7728591bf53" (UID: "ad3513d5-d2c9-48aa-8264-a7728591bf53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.352873 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-config-data" (OuterVolumeSpecName: "config-data") pod "ad3513d5-d2c9-48aa-8264-a7728591bf53" (UID: "ad3513d5-d2c9-48aa-8264-a7728591bf53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.376545 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj789\" (UniqueName: \"kubernetes.io/projected/ad3513d5-d2c9-48aa-8264-a7728591bf53-kube-api-access-zj789\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.376582 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.376591 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.376602 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad3513d5-d2c9-48aa-8264-a7728591bf53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:33 crc kubenswrapper[4835]: E0216 15:28:33.385724 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.622626 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bc6t7" event={"ID":"ad3513d5-d2c9-48aa-8264-a7728591bf53","Type":"ContainerDied","Data":"ad062b819be80e4f130d9c204977fd3290e6cd2ea30d97692b4aff29d4012833"} Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.622667 4835 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ad062b819be80e4f130d9c204977fd3290e6cd2ea30d97692b4aff29d4012833" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.622723 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bc6t7" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.793986 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.794045 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.794069 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.794285 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-log" containerID="cri-o://95646521532be8da4580621f1a1eceb6a8eb04bd44718413fcd34c4cb0e91fa2" gracePeriod=30 Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.794324 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-api" containerID="cri-o://88618b7fef8d799957afd0ebf25ad781d7bf8da42a591eb1fd282731c176417d" gracePeriod=30 Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.810971 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": EOF" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.811054 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.221:8774/\": EOF" Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.812494 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.812875 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="72daa885-8d90-44e7-af41-31467c9e0643" containerName="nova-scheduler-scheduler" containerID="cri-o://56434d808d9111c94b448ca38f358f94fe1ebf96669135267d6a8c1148933b51" gracePeriod=30 Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.912206 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.912424 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-log" containerID="cri-o://fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34" gracePeriod=30 Feb 16 15:28:33 crc kubenswrapper[4835]: I0216 15:28:33.912897 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-metadata" containerID="cri-o://601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3" gracePeriod=30 Feb 16 15:28:34 crc kubenswrapper[4835]: I0216 15:28:34.634128 4835 generic.go:334] "Generic (PLEG): container finished" podID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerID="95646521532be8da4580621f1a1eceb6a8eb04bd44718413fcd34c4cb0e91fa2" exitCode=143 Feb 16 15:28:34 crc kubenswrapper[4835]: I0216 15:28:34.634200 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0238764-cfcf-4828-8c5c-3f1e6d31a222","Type":"ContainerDied","Data":"95646521532be8da4580621f1a1eceb6a8eb04bd44718413fcd34c4cb0e91fa2"} Feb 16 15:28:34 crc 
kubenswrapper[4835]: I0216 15:28:34.637673 4835 generic.go:334] "Generic (PLEG): container finished" podID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerID="fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34" exitCode=143 Feb 16 15:28:34 crc kubenswrapper[4835]: I0216 15:28:34.637725 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6","Type":"ContainerDied","Data":"fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34"} Feb 16 15:28:35 crc kubenswrapper[4835]: E0216 15:28:35.377406 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56434d808d9111c94b448ca38f358f94fe1ebf96669135267d6a8c1148933b51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:28:35 crc kubenswrapper[4835]: E0216 15:28:35.378887 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56434d808d9111c94b448ca38f358f94fe1ebf96669135267d6a8c1148933b51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:28:35 crc kubenswrapper[4835]: E0216 15:28:35.383687 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56434d808d9111c94b448ca38f358f94fe1ebf96669135267d6a8c1148933b51" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:28:35 crc kubenswrapper[4835]: E0216 15:28:35.383740 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="72daa885-8d90-44e7-af41-31467c9e0643" containerName="nova-scheduler-scheduler" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.050323 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": read tcp 10.217.0.2:40124->10.217.0.213:8775: read: connection reset by peer" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.050772 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": read tcp 10.217.0.2:40122->10.217.0.213:8775: read: connection reset by peer" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.503887 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.665952 4835 generic.go:334] "Generic (PLEG): container finished" podID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerID="601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3" exitCode=0 Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.665993 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6","Type":"ContainerDied","Data":"601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3"} Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.666014 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.666030 4835 scope.go:117] "RemoveContainer" containerID="601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.666019 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6","Type":"ContainerDied","Data":"38ef5bcef4698514390792c5d871dd01e8684173c8741dad6434a6bc173eea6e"} Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.678333 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq8zl\" (UniqueName: \"kubernetes.io/projected/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-kube-api-access-fq8zl\") pod \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.678451 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-logs\") pod \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.678495 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-combined-ca-bundle\") pod \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.678866 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-logs" (OuterVolumeSpecName: "logs") pod "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" (UID: "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.678929 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-config-data\") pod \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.679011 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-nova-metadata-tls-certs\") pod \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\" (UID: \"d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6\") " Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.679581 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.684060 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-kube-api-access-fq8zl" (OuterVolumeSpecName: "kube-api-access-fq8zl") pod "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" (UID: "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6"). InnerVolumeSpecName "kube-api-access-fq8zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.689178 4835 scope.go:117] "RemoveContainer" containerID="fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.722394 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-config-data" (OuterVolumeSpecName: "config-data") pod "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" (UID: "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.731722 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" (UID: "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.742055 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" (UID: "d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.782217 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq8zl\" (UniqueName: \"kubernetes.io/projected/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-kube-api-access-fq8zl\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.782490 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.782500 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.782508 4835 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.808582 4835 scope.go:117] "RemoveContainer" containerID="601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3" Feb 16 15:28:37 crc kubenswrapper[4835]: E0216 15:28:37.809010 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3\": container with ID starting with 601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3 not found: ID does not exist" containerID="601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.809051 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3"} err="failed to get container status \"601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3\": rpc error: code = NotFound desc = could not find container \"601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3\": container with ID starting with 601ef8234e156a07ea717260fea7caebf814fbafb0a6536a19fa3c18d04d71d3 not found: ID does not exist" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.809082 4835 scope.go:117] "RemoveContainer" containerID="fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34" Feb 16 15:28:37 crc kubenswrapper[4835]: E0216 15:28:37.809351 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34\": container with ID starting with fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34 not found: ID does not exist" 
containerID="fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34" Feb 16 15:28:37 crc kubenswrapper[4835]: I0216 15:28:37.809380 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34"} err="failed to get container status \"fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34\": rpc error: code = NotFound desc = could not find container \"fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34\": container with ID starting with fd8bdc5408cc076b9ef549659ce0bf17bbccc0320c999e4e4f1aa47855d60d34 not found: ID does not exist" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.041815 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.055062 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.067059 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:28:38 crc kubenswrapper[4835]: E0216 15:28:38.067944 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" containerName="init" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.067960 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" containerName="init" Feb 16 15:28:38 crc kubenswrapper[4835]: E0216 15:28:38.067973 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-metadata" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.067984 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-metadata" Feb 16 15:28:38 crc kubenswrapper[4835]: E0216 15:28:38.067997 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ad3513d5-d2c9-48aa-8264-a7728591bf53" containerName="nova-manage" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.068005 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3513d5-d2c9-48aa-8264-a7728591bf53" containerName="nova-manage" Feb 16 15:28:38 crc kubenswrapper[4835]: E0216 15:28:38.068038 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-log" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.068044 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-log" Feb 16 15:28:38 crc kubenswrapper[4835]: E0216 15:28:38.068062 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" containerName="dnsmasq-dns" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.068067 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" containerName="dnsmasq-dns" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.068413 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-metadata" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.068428 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3513d5-d2c9-48aa-8264-a7728591bf53" containerName="nova-manage" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.068505 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" containerName="nova-metadata-log" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.068518 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be0de54-02e9-4cfa-9fef-b5e5c00bd572" containerName="dnsmasq-dns" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.070354 4835 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.075651 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.075829 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.081096 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.191035 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dc85a39-1037-45d8-9221-f4d30e0b01f7-logs\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.191078 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-config-data\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.191097 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.191143 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.191718 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r9gz\" (UniqueName: \"kubernetes.io/projected/7dc85a39-1037-45d8-9221-f4d30e0b01f7-kube-api-access-5r9gz\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.293636 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dc85a39-1037-45d8-9221-f4d30e0b01f7-logs\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.293679 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-config-data\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.293698 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.293747 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 
15:28:38.293852 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r9gz\" (UniqueName: \"kubernetes.io/projected/7dc85a39-1037-45d8-9221-f4d30e0b01f7-kube-api-access-5r9gz\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.294392 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dc85a39-1037-45d8-9221-f4d30e0b01f7-logs\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.298995 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.299121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.299229 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc85a39-1037-45d8-9221-f4d30e0b01f7-config-data\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.310233 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r9gz\" (UniqueName: 
\"kubernetes.io/projected/7dc85a39-1037-45d8-9221-f4d30e0b01f7-kube-api-access-5r9gz\") pod \"nova-metadata-0\" (UID: \"7dc85a39-1037-45d8-9221-f4d30e0b01f7\") " pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.400989 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:28:38 crc kubenswrapper[4835]: I0216 15:28:38.894631 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:28:38 crc kubenswrapper[4835]: W0216 15:28:38.900175 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dc85a39_1037_45d8_9221_f4d30e0b01f7.slice/crio-6ec3efe601c9fe1f296eae251632e757fc1403c1d77c17cdb8cfbff24bf88e82 WatchSource:0}: Error finding container 6ec3efe601c9fe1f296eae251632e757fc1403c1d77c17cdb8cfbff24bf88e82: Status 404 returned error can't find the container with id 6ec3efe601c9fe1f296eae251632e757fc1403c1d77c17cdb8cfbff24bf88e82 Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.401338 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6" path="/var/lib/kubelet/pods/d78d813f-018e-4d3f-8b8d-d5d6a08b3fe6/volumes" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.692664 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dc85a39-1037-45d8-9221-f4d30e0b01f7","Type":"ContainerStarted","Data":"4e06059933a3507d582c2d59688eb84992535c2a9e9083397a62074cbbe31a63"} Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.692706 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dc85a39-1037-45d8-9221-f4d30e0b01f7","Type":"ContainerStarted","Data":"840349e7834fe615d000ee33d811affd6cbeb9b2b56a010c86222cdcd468fadd"} Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.692716 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dc85a39-1037-45d8-9221-f4d30e0b01f7","Type":"ContainerStarted","Data":"6ec3efe601c9fe1f296eae251632e757fc1403c1d77c17cdb8cfbff24bf88e82"} Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.696076 4835 generic.go:334] "Generic (PLEG): container finished" podID="72daa885-8d90-44e7-af41-31467c9e0643" containerID="56434d808d9111c94b448ca38f358f94fe1ebf96669135267d6a8c1148933b51" exitCode=0 Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.696126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"72daa885-8d90-44e7-af41-31467c9e0643","Type":"ContainerDied","Data":"56434d808d9111c94b448ca38f358f94fe1ebf96669135267d6a8c1148933b51"} Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.696161 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"72daa885-8d90-44e7-af41-31467c9e0643","Type":"ContainerDied","Data":"67eeae2df0d4dea4823e97c6b558264a4d2a6a0364d26221bfe04505e1e2ed1b"} Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.696175 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67eeae2df0d4dea4823e97c6b558264a4d2a6a0364d26221bfe04505e1e2ed1b" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.712639 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.7126235909999998 podStartE2EDuration="1.712623591s" podCreationTimestamp="2026-02-16 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:28:39.71067719 +0000 UTC m=+1269.002670085" watchObservedRunningTime="2026-02-16 15:28:39.712623591 +0000 UTC m=+1269.004616486" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.713158 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.765994 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-combined-ca-bundle\") pod \"72daa885-8d90-44e7-af41-31467c9e0643\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.766555 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzhnb\" (UniqueName: \"kubernetes.io/projected/72daa885-8d90-44e7-af41-31467c9e0643-kube-api-access-wzhnb\") pod \"72daa885-8d90-44e7-af41-31467c9e0643\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.766655 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-config-data\") pod \"72daa885-8d90-44e7-af41-31467c9e0643\" (UID: \"72daa885-8d90-44e7-af41-31467c9e0643\") " Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.770150 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72daa885-8d90-44e7-af41-31467c9e0643-kube-api-access-wzhnb" (OuterVolumeSpecName: "kube-api-access-wzhnb") pod "72daa885-8d90-44e7-af41-31467c9e0643" (UID: "72daa885-8d90-44e7-af41-31467c9e0643"). InnerVolumeSpecName "kube-api-access-wzhnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.810622 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72daa885-8d90-44e7-af41-31467c9e0643" (UID: "72daa885-8d90-44e7-af41-31467c9e0643"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.814985 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-config-data" (OuterVolumeSpecName: "config-data") pod "72daa885-8d90-44e7-af41-31467c9e0643" (UID: "72daa885-8d90-44e7-af41-31467c9e0643"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.868672 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.868705 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzhnb\" (UniqueName: \"kubernetes.io/projected/72daa885-8d90-44e7-af41-31467c9e0643-kube-api-access-wzhnb\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:39 crc kubenswrapper[4835]: I0216 15:28:39.868718 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72daa885-8d90-44e7-af41-31467c9e0643-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.707404 4835 generic.go:334] "Generic (PLEG): container finished" podID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerID="88618b7fef8d799957afd0ebf25ad781d7bf8da42a591eb1fd282731c176417d" exitCode=0 Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.707488 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.707482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0238764-cfcf-4828-8c5c-3f1e6d31a222","Type":"ContainerDied","Data":"88618b7fef8d799957afd0ebf25ad781d7bf8da42a591eb1fd282731c176417d"} Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.741780 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.750795 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.769120 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:28:40 crc kubenswrapper[4835]: E0216 15:28:40.769595 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72daa885-8d90-44e7-af41-31467c9e0643" containerName="nova-scheduler-scheduler" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.769606 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="72daa885-8d90-44e7-af41-31467c9e0643" containerName="nova-scheduler-scheduler" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.769796 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="72daa885-8d90-44e7-af41-31467c9e0643" containerName="nova-scheduler-scheduler" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.770626 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.772110 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.780939 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.789699 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-config-data\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.789903 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45t6\" (UniqueName: \"kubernetes.io/projected/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-kube-api-access-t45t6\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.789937 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.850362 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.891032 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-config-data\") pod \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.891309 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-public-tls-certs\") pod \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.891600 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-internal-tls-certs\") pod \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.891678 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0238764-cfcf-4828-8c5c-3f1e6d31a222-logs\") pod \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.891765 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-combined-ca-bundle\") pod \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.891892 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bzwp\" (UniqueName: 
\"kubernetes.io/projected/f0238764-cfcf-4828-8c5c-3f1e6d31a222-kube-api-access-6bzwp\") pod \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\" (UID: \"f0238764-cfcf-4828-8c5c-3f1e6d31a222\") " Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.892162 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45t6\" (UniqueName: \"kubernetes.io/projected/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-kube-api-access-t45t6\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.892242 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.892405 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-config-data\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.895952 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0238764-cfcf-4828-8c5c-3f1e6d31a222-kube-api-access-6bzwp" (OuterVolumeSpecName: "kube-api-access-6bzwp") pod "f0238764-cfcf-4828-8c5c-3f1e6d31a222" (UID: "f0238764-cfcf-4828-8c5c-3f1e6d31a222"). InnerVolumeSpecName "kube-api-access-6bzwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.896222 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0238764-cfcf-4828-8c5c-3f1e6d31a222-logs" (OuterVolumeSpecName: "logs") pod "f0238764-cfcf-4828-8c5c-3f1e6d31a222" (UID: "f0238764-cfcf-4828-8c5c-3f1e6d31a222"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.896704 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-config-data\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.898388 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.910634 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45t6\" (UniqueName: \"kubernetes.io/projected/8781e3b3-2b5c-4a33-9cb4-f21080cd6743-kube-api-access-t45t6\") pod \"nova-scheduler-0\" (UID: \"8781e3b3-2b5c-4a33-9cb4-f21080cd6743\") " pod="openstack/nova-scheduler-0" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.922261 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0238764-cfcf-4828-8c5c-3f1e6d31a222" (UID: "f0238764-cfcf-4828-8c5c-3f1e6d31a222"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.927306 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-config-data" (OuterVolumeSpecName: "config-data") pod "f0238764-cfcf-4828-8c5c-3f1e6d31a222" (UID: "f0238764-cfcf-4828-8c5c-3f1e6d31a222"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.947711 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f0238764-cfcf-4828-8c5c-3f1e6d31a222" (UID: "f0238764-cfcf-4828-8c5c-3f1e6d31a222"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.948756 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f0238764-cfcf-4828-8c5c-3f1e6d31a222" (UID: "f0238764-cfcf-4828-8c5c-3f1e6d31a222"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.994158 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bzwp\" (UniqueName: \"kubernetes.io/projected/f0238764-cfcf-4828-8c5c-3f1e6d31a222-kube-api-access-6bzwp\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.994205 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.994224 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.994241 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.994258 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0238764-cfcf-4828-8c5c-3f1e6d31a222-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:40 crc kubenswrapper[4835]: I0216 15:28:40.994274 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0238764-cfcf-4828-8c5c-3f1e6d31a222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.161436 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.420699 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72daa885-8d90-44e7-af41-31467c9e0643" path="/var/lib/kubelet/pods/72daa885-8d90-44e7-af41-31467c9e0643/volumes" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.729464 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0238764-cfcf-4828-8c5c-3f1e6d31a222","Type":"ContainerDied","Data":"55c74c7dc62d8e2941cecc3f5c9281b0397cd3357b29fe6a277b5b082d545347"} Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.729637 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.729650 4835 scope.go:117] "RemoveContainer" containerID="88618b7fef8d799957afd0ebf25ad781d7bf8da42a591eb1fd282731c176417d" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.736732 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.791854 4835 scope.go:117] "RemoveContainer" containerID="95646521532be8da4580621f1a1eceb6a8eb04bd44718413fcd34c4cb0e91fa2" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.792811 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.809831 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.823035 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:41 crc kubenswrapper[4835]: E0216 15:28:41.823405 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-log" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.823421 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-log" Feb 16 15:28:41 crc kubenswrapper[4835]: E0216 15:28:41.823441 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-api" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.823447 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-api" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.823683 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-log" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.823703 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" containerName="nova-api-api" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.824858 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.830257 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.830483 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.831243 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.834410 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.915013 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-public-tls-certs\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.915206 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq4kh\" (UniqueName: \"kubernetes.io/projected/04aa3f43-11db-4f49-81bc-a8d6e225020e-kube-api-access-wq4kh\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.915361 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.915388 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-config-data\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.915449 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:41 crc kubenswrapper[4835]: I0216 15:28:41.915624 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04aa3f43-11db-4f49-81bc-a8d6e225020e-logs\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.018854 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-public-tls-certs\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.019214 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq4kh\" (UniqueName: \"kubernetes.io/projected/04aa3f43-11db-4f49-81bc-a8d6e225020e-kube-api-access-wq4kh\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.020032 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " 
pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.020546 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-config-data\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.020928 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.021086 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04aa3f43-11db-4f49-81bc-a8d6e225020e-logs\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.021946 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04aa3f43-11db-4f49-81bc-a8d6e225020e-logs\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.022997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-public-tls-certs\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.023322 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.023879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-config-data\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.026119 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04aa3f43-11db-4f49-81bc-a8d6e225020e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.040094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq4kh\" (UniqueName: \"kubernetes.io/projected/04aa3f43-11db-4f49-81bc-a8d6e225020e-kube-api-access-wq4kh\") pod \"nova-api-0\" (UID: \"04aa3f43-11db-4f49-81bc-a8d6e225020e\") " pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.166421 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.626135 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.742866 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04aa3f43-11db-4f49-81bc-a8d6e225020e","Type":"ContainerStarted","Data":"65fc212d050e4f2a5adf821bca959c29b4a9334d6c63c2fd42f97dd60130326c"} Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.750818 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8781e3b3-2b5c-4a33-9cb4-f21080cd6743","Type":"ContainerStarted","Data":"ced4c3a8c742f477ebc379a5d8e71d44a80e6aa310ff9c5e92b4bef6e902c9c4"} Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.750854 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8781e3b3-2b5c-4a33-9cb4-f21080cd6743","Type":"ContainerStarted","Data":"2c8dd8fe41617c131a7d34049c8fd3e81401a63b801490979fa92459cf0c1d90"} Feb 16 15:28:42 crc kubenswrapper[4835]: I0216 15:28:42.777373 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7773533649999997 podStartE2EDuration="2.777353365s" podCreationTimestamp="2026-02-16 15:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:28:42.76869636 +0000 UTC m=+1272.060689245" watchObservedRunningTime="2026-02-16 15:28:42.777353365 +0000 UTC m=+1272.069346260" Feb 16 15:28:43 crc kubenswrapper[4835]: I0216 15:28:43.405724 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0238764-cfcf-4828-8c5c-3f1e6d31a222" path="/var/lib/kubelet/pods/f0238764-cfcf-4828-8c5c-3f1e6d31a222/volumes" Feb 16 15:28:43 crc kubenswrapper[4835]: I0216 15:28:43.407087 4835 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:28:43 crc kubenswrapper[4835]: I0216 15:28:43.407120 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:28:43 crc kubenswrapper[4835]: I0216 15:28:43.765058 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04aa3f43-11db-4f49-81bc-a8d6e225020e","Type":"ContainerStarted","Data":"113f5be9a97370ee2e26acbc14135059d5b5e577e9ac18828fa117ec41e085e4"} Feb 16 15:28:43 crc kubenswrapper[4835]: I0216 15:28:43.765164 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04aa3f43-11db-4f49-81bc-a8d6e225020e","Type":"ContainerStarted","Data":"df2b245a1b3a7cc2b772c49a5faf45206234f8cd1ba9d233b1cde3537eba09a8"} Feb 16 15:28:43 crc kubenswrapper[4835]: I0216 15:28:43.805686 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.805660508 podStartE2EDuration="2.805660508s" podCreationTimestamp="2026-02-16 15:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:28:43.792632719 +0000 UTC m=+1273.084625654" watchObservedRunningTime="2026-02-16 15:28:43.805660508 +0000 UTC m=+1273.097653423" Feb 16 15:28:46 crc kubenswrapper[4835]: I0216 15:28:46.162182 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:28:47 crc kubenswrapper[4835]: E0216 15:28:47.382278 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:28:48 crc kubenswrapper[4835]: 
I0216 15:28:48.401743 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:28:48 crc kubenswrapper[4835]: I0216 15:28:48.401821 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:28:49 crc kubenswrapper[4835]: I0216 15:28:49.417665 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7dc85a39-1037-45d8-9221-f4d30e0b01f7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:28:49 crc kubenswrapper[4835]: I0216 15:28:49.417707 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7dc85a39-1037-45d8-9221-f4d30e0b01f7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:28:51 crc kubenswrapper[4835]: I0216 15:28:51.162346 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:28:51 crc kubenswrapper[4835]: I0216 15:28:51.212609 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:28:51 crc kubenswrapper[4835]: I0216 15:28:51.832318 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 15:28:51 crc kubenswrapper[4835]: I0216 15:28:51.940266 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:28:52 crc kubenswrapper[4835]: I0216 15:28:52.166581 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:28:52 crc kubenswrapper[4835]: I0216 15:28:52.166636 4835 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:28:53 crc kubenswrapper[4835]: I0216 15:28:53.178854 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04aa3f43-11db-4f49-81bc-a8d6e225020e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:28:53 crc kubenswrapper[4835]: I0216 15:28:53.178877 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04aa3f43-11db-4f49-81bc-a8d6e225020e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:28:58 crc kubenswrapper[4835]: I0216 15:28:58.409355 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:28:58 crc kubenswrapper[4835]: I0216 15:28:58.414131 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:28:58 crc kubenswrapper[4835]: I0216 15:28:58.415385 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:28:58 crc kubenswrapper[4835]: I0216 15:28:58.964547 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:29:02 crc kubenswrapper[4835]: I0216 15:29:02.192653 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 15:29:02 crc kubenswrapper[4835]: I0216 15:29:02.193623 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 15:29:02 crc kubenswrapper[4835]: I0216 15:29:02.201602 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 15:29:02 crc kubenswrapper[4835]: I0216 
15:29:02.210077 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 15:29:02 crc kubenswrapper[4835]: E0216 15:29:02.391120 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:29:03 crc kubenswrapper[4835]: I0216 15:29:03.001273 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 15:29:03 crc kubenswrapper[4835]: I0216 15:29:03.009199 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 15:29:17 crc kubenswrapper[4835]: E0216 15:29:17.786028 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:29:18 crc kubenswrapper[4835]: I0216 15:29:18.587020 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:29:18 crc kubenswrapper[4835]: I0216 15:29:18.587083 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 
16 15:29:28 crc kubenswrapper[4835]: E0216 15:29:28.383161 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:29:41 crc kubenswrapper[4835]: E0216 15:29:41.387095 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:29:48 crc kubenswrapper[4835]: I0216 15:29:48.586981 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:29:48 crc kubenswrapper[4835]: I0216 15:29:48.588666 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:29:53 crc kubenswrapper[4835]: E0216 15:29:53.383095 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:30:00 crc kubenswrapper[4835]: 
I0216 15:30:00.174604 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk"] Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.176594 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.178934 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.179503 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.191430 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk"] Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.290709 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzl2w\" (UniqueName: \"kubernetes.io/projected/0e8298ae-dad5-4acf-b350-128afd0fe3ed-kube-api-access-kzl2w\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.290994 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e8298ae-dad5-4acf-b350-128afd0fe3ed-secret-volume\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.291257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e8298ae-dad5-4acf-b350-128afd0fe3ed-config-volume\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.393129 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzl2w\" (UniqueName: \"kubernetes.io/projected/0e8298ae-dad5-4acf-b350-128afd0fe3ed-kube-api-access-kzl2w\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.393298 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e8298ae-dad5-4acf-b350-128afd0fe3ed-secret-volume\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.393355 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e8298ae-dad5-4acf-b350-128afd0fe3ed-config-volume\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.394394 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e8298ae-dad5-4acf-b350-128afd0fe3ed-config-volume\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc 
kubenswrapper[4835]: I0216 15:30:00.407466 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e8298ae-dad5-4acf-b350-128afd0fe3ed-secret-volume\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.417329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzl2w\" (UniqueName: \"kubernetes.io/projected/0e8298ae-dad5-4acf-b350-128afd0fe3ed-kube-api-access-kzl2w\") pod \"collect-profiles-29520930-44vvk\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:00 crc kubenswrapper[4835]: I0216 15:30:00.522562 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:01 crc kubenswrapper[4835]: I0216 15:30:01.039833 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk"] Feb 16 15:30:01 crc kubenswrapper[4835]: W0216 15:30:01.046173 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e8298ae_dad5_4acf_b350_128afd0fe3ed.slice/crio-32169543aa9a0f6e627ebb0191c23839e5208568c63413c44705f8e923bf1f52 WatchSource:0}: Error finding container 32169543aa9a0f6e627ebb0191c23839e5208568c63413c44705f8e923bf1f52: Status 404 returned error can't find the container with id 32169543aa9a0f6e627ebb0191c23839e5208568c63413c44705f8e923bf1f52 Feb 16 15:30:01 crc kubenswrapper[4835]: I0216 15:30:01.317303 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" 
event={"ID":"0e8298ae-dad5-4acf-b350-128afd0fe3ed","Type":"ContainerStarted","Data":"39fd73db24e2e0ddd902bf5158c334d920392ee46563cb09c9c0ada8dcf8e582"} Feb 16 15:30:01 crc kubenswrapper[4835]: I0216 15:30:01.317758 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" event={"ID":"0e8298ae-dad5-4acf-b350-128afd0fe3ed","Type":"ContainerStarted","Data":"32169543aa9a0f6e627ebb0191c23839e5208568c63413c44705f8e923bf1f52"} Feb 16 15:30:01 crc kubenswrapper[4835]: I0216 15:30:01.341436 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" podStartSLOduration=1.341419281 podStartE2EDuration="1.341419281s" podCreationTimestamp="2026-02-16 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:30:01.33257543 +0000 UTC m=+1350.624568385" watchObservedRunningTime="2026-02-16 15:30:01.341419281 +0000 UTC m=+1350.633412176" Feb 16 15:30:02 crc kubenswrapper[4835]: I0216 15:30:02.327079 4835 generic.go:334] "Generic (PLEG): container finished" podID="0e8298ae-dad5-4acf-b350-128afd0fe3ed" containerID="39fd73db24e2e0ddd902bf5158c334d920392ee46563cb09c9c0ada8dcf8e582" exitCode=0 Feb 16 15:30:02 crc kubenswrapper[4835]: I0216 15:30:02.327143 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" event={"ID":"0e8298ae-dad5-4acf-b350-128afd0fe3ed","Type":"ContainerDied","Data":"39fd73db24e2e0ddd902bf5158c334d920392ee46563cb09c9c0ada8dcf8e582"} Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.706561 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.879984 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e8298ae-dad5-4acf-b350-128afd0fe3ed-secret-volume\") pod \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.880042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e8298ae-dad5-4acf-b350-128afd0fe3ed-config-volume\") pod \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.880110 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzl2w\" (UniqueName: \"kubernetes.io/projected/0e8298ae-dad5-4acf-b350-128afd0fe3ed-kube-api-access-kzl2w\") pod \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\" (UID: \"0e8298ae-dad5-4acf-b350-128afd0fe3ed\") " Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.880820 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e8298ae-dad5-4acf-b350-128afd0fe3ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e8298ae-dad5-4acf-b350-128afd0fe3ed" (UID: "0e8298ae-dad5-4acf-b350-128afd0fe3ed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.884937 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e8298ae-dad5-4acf-b350-128afd0fe3ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e8298ae-dad5-4acf-b350-128afd0fe3ed" (UID: "0e8298ae-dad5-4acf-b350-128afd0fe3ed"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.885361 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e8298ae-dad5-4acf-b350-128afd0fe3ed-kube-api-access-kzl2w" (OuterVolumeSpecName: "kube-api-access-kzl2w") pod "0e8298ae-dad5-4acf-b350-128afd0fe3ed" (UID: "0e8298ae-dad5-4acf-b350-128afd0fe3ed"). InnerVolumeSpecName "kube-api-access-kzl2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.982632 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e8298ae-dad5-4acf-b350-128afd0fe3ed-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.982680 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e8298ae-dad5-4acf-b350-128afd0fe3ed-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:03 crc kubenswrapper[4835]: I0216 15:30:03.982692 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzl2w\" (UniqueName: \"kubernetes.io/projected/0e8298ae-dad5-4acf-b350-128afd0fe3ed-kube-api-access-kzl2w\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:04 crc kubenswrapper[4835]: I0216 15:30:04.346089 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" event={"ID":"0e8298ae-dad5-4acf-b350-128afd0fe3ed","Type":"ContainerDied","Data":"32169543aa9a0f6e627ebb0191c23839e5208568c63413c44705f8e923bf1f52"} Feb 16 15:30:04 crc kubenswrapper[4835]: I0216 15:30:04.346382 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32169543aa9a0f6e627ebb0191c23839e5208568c63413c44705f8e923bf1f52" Feb 16 15:30:04 crc kubenswrapper[4835]: I0216 15:30:04.346131 4835 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-44vvk" Feb 16 15:30:06 crc kubenswrapper[4835]: E0216 15:30:06.381785 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:30:18 crc kubenswrapper[4835]: I0216 15:30:18.587177 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:30:18 crc kubenswrapper[4835]: I0216 15:30:18.587950 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:30:18 crc kubenswrapper[4835]: I0216 15:30:18.588018 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:30:18 crc kubenswrapper[4835]: I0216 15:30:18.589139 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb539ff3d97049cb7ff841e79b175fa7a23e4c2b3f278dee053bc66e237d104c"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:30:18 crc kubenswrapper[4835]: I0216 15:30:18.589261 4835 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://eb539ff3d97049cb7ff841e79b175fa7a23e4c2b3f278dee053bc66e237d104c" gracePeriod=600 Feb 16 15:30:19 crc kubenswrapper[4835]: I0216 15:30:19.119642 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="eb539ff3d97049cb7ff841e79b175fa7a23e4c2b3f278dee053bc66e237d104c" exitCode=0 Feb 16 15:30:19 crc kubenswrapper[4835]: I0216 15:30:19.119705 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"eb539ff3d97049cb7ff841e79b175fa7a23e4c2b3f278dee053bc66e237d104c"} Feb 16 15:30:19 crc kubenswrapper[4835]: I0216 15:30:19.120128 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624"} Feb 16 15:30:19 crc kubenswrapper[4835]: I0216 15:30:19.120170 4835 scope.go:117] "RemoveContainer" containerID="55a8425e60a5ca5af019911f05c32c6de22275f80b64e52b734846168a32e3b3" Feb 16 15:30:19 crc kubenswrapper[4835]: E0216 15:30:19.381424 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:30:31 crc kubenswrapper[4835]: I0216 15:30:31.961514 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvc2n"] Feb 16 15:30:31 crc 
kubenswrapper[4835]: E0216 15:30:31.962908 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e8298ae-dad5-4acf-b350-128afd0fe3ed" containerName="collect-profiles" Feb 16 15:30:31 crc kubenswrapper[4835]: I0216 15:30:31.962927 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e8298ae-dad5-4acf-b350-128afd0fe3ed" containerName="collect-profiles" Feb 16 15:30:31 crc kubenswrapper[4835]: I0216 15:30:31.963169 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e8298ae-dad5-4acf-b350-128afd0fe3ed" containerName="collect-profiles" Feb 16 15:30:31 crc kubenswrapper[4835]: I0216 15:30:31.965395 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:31 crc kubenswrapper[4835]: I0216 15:30:31.976429 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvc2n"] Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.077633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tsqm\" (UniqueName: \"kubernetes.io/projected/c2a88505-6176-419f-9e3c-112096164d61-kube-api-access-7tsqm\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.077728 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-catalog-content\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.077836 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-utilities\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.179107 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tsqm\" (UniqueName: \"kubernetes.io/projected/c2a88505-6176-419f-9e3c-112096164d61-kube-api-access-7tsqm\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.179184 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-catalog-content\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.179276 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-utilities\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.179726 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-catalog-content\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.180883 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-utilities\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.197684 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tsqm\" (UniqueName: \"kubernetes.io/projected/c2a88505-6176-419f-9e3c-112096164d61-kube-api-access-7tsqm\") pod \"redhat-operators-qvc2n\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.297116 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:32 crc kubenswrapper[4835]: W0216 15:30:32.759555 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2a88505_6176_419f_9e3c_112096164d61.slice/crio-27fcf7c514f3b9395502982cec67f019e184e1dd5bf5438981e6e4c365bfcdcc WatchSource:0}: Error finding container 27fcf7c514f3b9395502982cec67f019e184e1dd5bf5438981e6e4c365bfcdcc: Status 404 returned error can't find the container with id 27fcf7c514f3b9395502982cec67f019e184e1dd5bf5438981e6e4c365bfcdcc Feb 16 15:30:32 crc kubenswrapper[4835]: I0216 15:30:32.764419 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvc2n"] Feb 16 15:30:33 crc kubenswrapper[4835]: I0216 15:30:33.306708 4835 generic.go:334] "Generic (PLEG): container finished" podID="c2a88505-6176-419f-9e3c-112096164d61" containerID="dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae" exitCode=0 Feb 16 15:30:33 crc kubenswrapper[4835]: I0216 15:30:33.306991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvc2n" 
event={"ID":"c2a88505-6176-419f-9e3c-112096164d61","Type":"ContainerDied","Data":"dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae"} Feb 16 15:30:33 crc kubenswrapper[4835]: I0216 15:30:33.307017 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvc2n" event={"ID":"c2a88505-6176-419f-9e3c-112096164d61","Type":"ContainerStarted","Data":"27fcf7c514f3b9395502982cec67f019e184e1dd5bf5438981e6e4c365bfcdcc"} Feb 16 15:30:34 crc kubenswrapper[4835]: I0216 15:30:34.317741 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvc2n" event={"ID":"c2a88505-6176-419f-9e3c-112096164d61","Type":"ContainerStarted","Data":"4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160"} Feb 16 15:30:34 crc kubenswrapper[4835]: E0216 15:30:34.380114 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:30:37 crc kubenswrapper[4835]: I0216 15:30:37.352666 4835 generic.go:334] "Generic (PLEG): container finished" podID="c2a88505-6176-419f-9e3c-112096164d61" containerID="4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160" exitCode=0 Feb 16 15:30:37 crc kubenswrapper[4835]: I0216 15:30:37.352728 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvc2n" event={"ID":"c2a88505-6176-419f-9e3c-112096164d61","Type":"ContainerDied","Data":"4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160"} Feb 16 15:30:38 crc kubenswrapper[4835]: I0216 15:30:38.364471 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvc2n" 
event={"ID":"c2a88505-6176-419f-9e3c-112096164d61","Type":"ContainerStarted","Data":"d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194"} Feb 16 15:30:38 crc kubenswrapper[4835]: I0216 15:30:38.390570 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvc2n" podStartSLOduration=2.941927536 podStartE2EDuration="7.390554974s" podCreationTimestamp="2026-02-16 15:30:31 +0000 UTC" firstStartedPulling="2026-02-16 15:30:33.309669475 +0000 UTC m=+1382.601662370" lastFinishedPulling="2026-02-16 15:30:37.758296923 +0000 UTC m=+1387.050289808" observedRunningTime="2026-02-16 15:30:38.386613261 +0000 UTC m=+1387.678606156" watchObservedRunningTime="2026-02-16 15:30:38.390554974 +0000 UTC m=+1387.682547869" Feb 16 15:30:42 crc kubenswrapper[4835]: I0216 15:30:42.297925 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:42 crc kubenswrapper[4835]: I0216 15:30:42.298353 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:43 crc kubenswrapper[4835]: I0216 15:30:43.346936 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qvc2n" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="registry-server" probeResult="failure" output=< Feb 16 15:30:43 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 16 15:30:43 crc kubenswrapper[4835]: > Feb 16 15:30:48 crc kubenswrapper[4835]: E0216 15:30:48.379949 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:30:52 crc 
kubenswrapper[4835]: I0216 15:30:52.457821 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:52 crc kubenswrapper[4835]: I0216 15:30:52.537962 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:52 crc kubenswrapper[4835]: I0216 15:30:52.699038 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvc2n"] Feb 16 15:30:53 crc kubenswrapper[4835]: I0216 15:30:53.530340 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvc2n" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="registry-server" containerID="cri-o://d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194" gracePeriod=2 Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.068498 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.157700 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tsqm\" (UniqueName: \"kubernetes.io/projected/c2a88505-6176-419f-9e3c-112096164d61-kube-api-access-7tsqm\") pod \"c2a88505-6176-419f-9e3c-112096164d61\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.157904 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-catalog-content\") pod \"c2a88505-6176-419f-9e3c-112096164d61\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.157960 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-utilities\") pod \"c2a88505-6176-419f-9e3c-112096164d61\" (UID: \"c2a88505-6176-419f-9e3c-112096164d61\") " Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.159050 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-utilities" (OuterVolumeSpecName: "utilities") pod "c2a88505-6176-419f-9e3c-112096164d61" (UID: "c2a88505-6176-419f-9e3c-112096164d61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.179820 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a88505-6176-419f-9e3c-112096164d61-kube-api-access-7tsqm" (OuterVolumeSpecName: "kube-api-access-7tsqm") pod "c2a88505-6176-419f-9e3c-112096164d61" (UID: "c2a88505-6176-419f-9e3c-112096164d61"). InnerVolumeSpecName "kube-api-access-7tsqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.260354 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.260577 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tsqm\" (UniqueName: \"kubernetes.io/projected/c2a88505-6176-419f-9e3c-112096164d61-kube-api-access-7tsqm\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.285577 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2a88505-6176-419f-9e3c-112096164d61" (UID: "c2a88505-6176-419f-9e3c-112096164d61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.362724 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2a88505-6176-419f-9e3c-112096164d61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.542466 4835 generic.go:334] "Generic (PLEG): container finished" podID="c2a88505-6176-419f-9e3c-112096164d61" containerID="d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194" exitCode=0 Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.542501 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvc2n" event={"ID":"c2a88505-6176-419f-9e3c-112096164d61","Type":"ContainerDied","Data":"d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194"} Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.542582 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-qvc2n" event={"ID":"c2a88505-6176-419f-9e3c-112096164d61","Type":"ContainerDied","Data":"27fcf7c514f3b9395502982cec67f019e184e1dd5bf5438981e6e4c365bfcdcc"} Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.542605 4835 scope.go:117] "RemoveContainer" containerID="d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.542604 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvc2n" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.573962 4835 scope.go:117] "RemoveContainer" containerID="4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.605751 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvc2n"] Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.629763 4835 scope.go:117] "RemoveContainer" containerID="dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.641568 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvc2n"] Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.674512 4835 scope.go:117] "RemoveContainer" containerID="d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194" Feb 16 15:30:54 crc kubenswrapper[4835]: E0216 15:30:54.675007 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194\": container with ID starting with d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194 not found: ID does not exist" containerID="d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.675041 4835 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194"} err="failed to get container status \"d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194\": rpc error: code = NotFound desc = could not find container \"d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194\": container with ID starting with d5fe3000793457e20a38056a628fe7fbde1022ccf68df5fac43df51c73bab194 not found: ID does not exist" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.675063 4835 scope.go:117] "RemoveContainer" containerID="4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160" Feb 16 15:30:54 crc kubenswrapper[4835]: E0216 15:30:54.675441 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160\": container with ID starting with 4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160 not found: ID does not exist" containerID="4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.675501 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160"} err="failed to get container status \"4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160\": rpc error: code = NotFound desc = could not find container \"4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160\": container with ID starting with 4c0432753559fdd7a56cf987b7cef7b476f5d46af931da8b162c5bad1350f160 not found: ID does not exist" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.675547 4835 scope.go:117] "RemoveContainer" containerID="dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae" Feb 16 15:30:54 crc kubenswrapper[4835]: E0216 
15:30:54.675834 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae\": container with ID starting with dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae not found: ID does not exist" containerID="dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae" Feb 16 15:30:54 crc kubenswrapper[4835]: I0216 15:30:54.675859 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae"} err="failed to get container status \"dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae\": rpc error: code = NotFound desc = could not find container \"dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae\": container with ID starting with dd599b4b2982addd1569d73e8560185aee6680f9c03c7915b9bf442aacace5ae not found: ID does not exist" Feb 16 15:30:55 crc kubenswrapper[4835]: I0216 15:30:55.391704 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a88505-6176-419f-9e3c-112096164d61" path="/var/lib/kubelet/pods/c2a88505-6176-419f-9e3c-112096164d61/volumes" Feb 16 15:31:01 crc kubenswrapper[4835]: E0216 15:31:01.391168 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:31:13 crc kubenswrapper[4835]: E0216 15:31:13.519400 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:31:13 crc kubenswrapper[4835]: E0216 15:31:13.520087 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:31:13 crc kubenswrapper[4835]: E0216 15:31:13.520303 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:31:13 crc kubenswrapper[4835]: E0216 15:31:13.521553 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:31:28 crc kubenswrapper[4835]: E0216 15:31:28.381621 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:31:33 crc kubenswrapper[4835]: I0216 15:31:33.854977 4835 scope.go:117] "RemoveContainer" containerID="6f35b66214973307f150dff8aa34629732910d43582e310773633a1c678723fc" Feb 16 15:31:33 crc kubenswrapper[4835]: I0216 15:31:33.879645 4835 scope.go:117] "RemoveContainer" containerID="1b1b90ea0e2a380fd833e123941bd651ed72f6787e386fae69d5e9a55bf8e11c" Feb 16 15:31:33 crc kubenswrapper[4835]: I0216 15:31:33.945377 4835 scope.go:117] "RemoveContainer" containerID="6e0d7a782680e0d85f93b31b0d0dcd169a5401db122807afa70d92c2f4a592e1" Feb 16 15:31:41 crc kubenswrapper[4835]: E0216 15:31:41.390612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:31:56 crc kubenswrapper[4835]: E0216 15:31:56.381943 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:32:07 crc kubenswrapper[4835]: E0216 15:32:07.379991 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:32:18 crc kubenswrapper[4835]: E0216 15:32:18.381468 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:32:18 crc kubenswrapper[4835]: I0216 15:32:18.586767 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:32:18 crc kubenswrapper[4835]: I0216 15:32:18.587247 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:32:33 crc kubenswrapper[4835]: E0216 15:32:33.382496 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:32:34 crc kubenswrapper[4835]: I0216 15:32:34.083643 4835 scope.go:117] "RemoveContainer" 
containerID="9493edaf9cd7300d06cbd7664e06660377f42377d268488700ceec2cee24c7f8" Feb 16 15:32:34 crc kubenswrapper[4835]: I0216 15:32:34.125355 4835 scope.go:117] "RemoveContainer" containerID="a687e0f9b23ec6f5225aaece7449a2b155b00a467b65b6bdcd8f8fb2b736e916" Feb 16 15:32:34 crc kubenswrapper[4835]: I0216 15:32:34.171707 4835 scope.go:117] "RemoveContainer" containerID="f3f8e09d5e948b06479284ded54feb7fa3a267eea80d69de2872015b0f731cac" Feb 16 15:32:34 crc kubenswrapper[4835]: I0216 15:32:34.199863 4835 scope.go:117] "RemoveContainer" containerID="e431095ba5f0364f8b7718602fa1fd38bb6cfc13e8b652a30aabc4e602cf8ca4" Feb 16 15:32:34 crc kubenswrapper[4835]: I0216 15:32:34.232610 4835 scope.go:117] "RemoveContainer" containerID="9e2b082874c49185f54f8d2208da66f11fd430f24757f32be3226493c6db692a" Feb 16 15:32:48 crc kubenswrapper[4835]: E0216 15:32:48.381219 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:32:48 crc kubenswrapper[4835]: I0216 15:32:48.586398 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:32:48 crc kubenswrapper[4835]: I0216 15:32:48.586755 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:33:00 crc kubenswrapper[4835]: E0216 
15:33:00.382249 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.081224 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5szv8"] Feb 16 15:33:13 crc kubenswrapper[4835]: E0216 15:33:13.082552 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="extract-utilities" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.082568 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="extract-utilities" Feb 16 15:33:13 crc kubenswrapper[4835]: E0216 15:33:13.082581 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="extract-content" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.082590 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="extract-content" Feb 16 15:33:13 crc kubenswrapper[4835]: E0216 15:33:13.082628 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="registry-server" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.082637 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="registry-server" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.082942 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a88505-6176-419f-9e3c-112096164d61" containerName="registry-server" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.084973 4835 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.100040 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5szv8"] Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.230429 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-utilities\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.230510 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66962\" (UniqueName: \"kubernetes.io/projected/8429a090-96fb-4ea6-b884-7fc9d8dc0376-kube-api-access-66962\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.230587 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-catalog-content\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.332092 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-utilities\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.332181 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66962\" (UniqueName: \"kubernetes.io/projected/8429a090-96fb-4ea6-b884-7fc9d8dc0376-kube-api-access-66962\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.332221 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-catalog-content\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.332644 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-utilities\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.332718 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-catalog-content\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.357129 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66962\" (UniqueName: \"kubernetes.io/projected/8429a090-96fb-4ea6-b884-7fc9d8dc0376-kube-api-access-66962\") pod \"redhat-marketplace-5szv8\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: E0216 15:33:13.386372 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.459124 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:13 crc kubenswrapper[4835]: I0216 15:33:13.956323 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5szv8"] Feb 16 15:33:14 crc kubenswrapper[4835]: I0216 15:33:14.330885 4835 generic.go:334] "Generic (PLEG): container finished" podID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerID="94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0" exitCode=0 Feb 16 15:33:14 crc kubenswrapper[4835]: I0216 15:33:14.330922 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5szv8" event={"ID":"8429a090-96fb-4ea6-b884-7fc9d8dc0376","Type":"ContainerDied","Data":"94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0"} Feb 16 15:33:14 crc kubenswrapper[4835]: I0216 15:33:14.330945 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5szv8" event={"ID":"8429a090-96fb-4ea6-b884-7fc9d8dc0376","Type":"ContainerStarted","Data":"c8669b310919e113d6d7d98ea39d9daf1c2b3a716c94457a677614b2a933eb09"} Feb 16 15:33:14 crc kubenswrapper[4835]: I0216 15:33:14.333597 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:33:16 crc kubenswrapper[4835]: I0216 15:33:16.353872 4835 generic.go:334] "Generic (PLEG): container finished" podID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerID="479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a" exitCode=0 Feb 16 15:33:16 
crc kubenswrapper[4835]: I0216 15:33:16.353938 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5szv8" event={"ID":"8429a090-96fb-4ea6-b884-7fc9d8dc0376","Type":"ContainerDied","Data":"479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a"} Feb 16 15:33:17 crc kubenswrapper[4835]: I0216 15:33:17.365410 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5szv8" event={"ID":"8429a090-96fb-4ea6-b884-7fc9d8dc0376","Type":"ContainerStarted","Data":"1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c"} Feb 16 15:33:17 crc kubenswrapper[4835]: I0216 15:33:17.392760 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5szv8" podStartSLOduration=1.9635547789999999 podStartE2EDuration="4.392744382s" podCreationTimestamp="2026-02-16 15:33:13 +0000 UTC" firstStartedPulling="2026-02-16 15:33:14.333362284 +0000 UTC m=+1543.625355179" lastFinishedPulling="2026-02-16 15:33:16.762551847 +0000 UTC m=+1546.054544782" observedRunningTime="2026-02-16 15:33:17.384080685 +0000 UTC m=+1546.676073580" watchObservedRunningTime="2026-02-16 15:33:17.392744382 +0000 UTC m=+1546.684737277" Feb 16 15:33:18 crc kubenswrapper[4835]: I0216 15:33:18.586933 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:33:18 crc kubenswrapper[4835]: I0216 15:33:18.587218 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 16 15:33:18 crc kubenswrapper[4835]: I0216 15:33:18.587264 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:33:18 crc kubenswrapper[4835]: I0216 15:33:18.588006 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:33:18 crc kubenswrapper[4835]: I0216 15:33:18.588063 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" gracePeriod=600 Feb 16 15:33:18 crc kubenswrapper[4835]: E0216 15:33:18.713798 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:33:19 crc kubenswrapper[4835]: I0216 15:33:19.391115 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" exitCode=0 Feb 16 15:33:19 crc kubenswrapper[4835]: I0216 15:33:19.391170 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" 
event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624"} Feb 16 15:33:19 crc kubenswrapper[4835]: I0216 15:33:19.391489 4835 scope.go:117] "RemoveContainer" containerID="eb539ff3d97049cb7ff841e79b175fa7a23e4c2b3f278dee053bc66e237d104c" Feb 16 15:33:19 crc kubenswrapper[4835]: I0216 15:33:19.392030 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:33:19 crc kubenswrapper[4835]: E0216 15:33:19.392609 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:33:23 crc kubenswrapper[4835]: I0216 15:33:23.467209 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:23 crc kubenswrapper[4835]: I0216 15:33:23.468376 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:23 crc kubenswrapper[4835]: I0216 15:33:23.521893 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:24 crc kubenswrapper[4835]: I0216 15:33:24.506336 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:24 crc kubenswrapper[4835]: I0216 15:33:24.581761 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5szv8"] Feb 16 15:33:26 crc kubenswrapper[4835]: I0216 15:33:26.463777 4835 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5szv8" podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerName="registry-server" containerID="cri-o://1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c" gracePeriod=2 Feb 16 15:33:26 crc kubenswrapper[4835]: I0216 15:33:26.990027 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.139759 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-utilities\") pod \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.139875 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66962\" (UniqueName: \"kubernetes.io/projected/8429a090-96fb-4ea6-b884-7fc9d8dc0376-kube-api-access-66962\") pod \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.139983 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-catalog-content\") pod \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\" (UID: \"8429a090-96fb-4ea6-b884-7fc9d8dc0376\") " Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.140728 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-utilities" (OuterVolumeSpecName: "utilities") pod "8429a090-96fb-4ea6-b884-7fc9d8dc0376" (UID: "8429a090-96fb-4ea6-b884-7fc9d8dc0376"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.147788 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8429a090-96fb-4ea6-b884-7fc9d8dc0376-kube-api-access-66962" (OuterVolumeSpecName: "kube-api-access-66962") pod "8429a090-96fb-4ea6-b884-7fc9d8dc0376" (UID: "8429a090-96fb-4ea6-b884-7fc9d8dc0376"). InnerVolumeSpecName "kube-api-access-66962". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.171916 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8429a090-96fb-4ea6-b884-7fc9d8dc0376" (UID: "8429a090-96fb-4ea6-b884-7fc9d8dc0376"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.242337 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66962\" (UniqueName: \"kubernetes.io/projected/8429a090-96fb-4ea6-b884-7fc9d8dc0376-kube-api-access-66962\") on node \"crc\" DevicePath \"\"" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.242368 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.242378 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8429a090-96fb-4ea6-b884-7fc9d8dc0376-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.485032 4835 generic.go:334] "Generic (PLEG): container finished" podID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" 
containerID="1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c" exitCode=0 Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.485070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5szv8" event={"ID":"8429a090-96fb-4ea6-b884-7fc9d8dc0376","Type":"ContainerDied","Data":"1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c"} Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.485100 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5szv8" event={"ID":"8429a090-96fb-4ea6-b884-7fc9d8dc0376","Type":"ContainerDied","Data":"c8669b310919e113d6d7d98ea39d9daf1c2b3a716c94457a677614b2a933eb09"} Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.485119 4835 scope.go:117] "RemoveContainer" containerID="1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.487318 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5szv8" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.514767 4835 scope.go:117] "RemoveContainer" containerID="479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.515431 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5szv8"] Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.529901 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5szv8"] Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.551376 4835 scope.go:117] "RemoveContainer" containerID="94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.578146 4835 scope.go:117] "RemoveContainer" containerID="1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c" Feb 16 15:33:27 crc kubenswrapper[4835]: E0216 15:33:27.578572 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c\": container with ID starting with 1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c not found: ID does not exist" containerID="1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.578605 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c"} err="failed to get container status \"1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c\": rpc error: code = NotFound desc = could not find container \"1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c\": container with ID starting with 1dab9c6bc233ccaedfaddab1e45e40ef8e06da6af3a6348a5edea53d4d5d4e1c not found: 
ID does not exist" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.578624 4835 scope.go:117] "RemoveContainer" containerID="479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a" Feb 16 15:33:27 crc kubenswrapper[4835]: E0216 15:33:27.578972 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a\": container with ID starting with 479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a not found: ID does not exist" containerID="479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.578995 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a"} err="failed to get container status \"479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a\": rpc error: code = NotFound desc = could not find container \"479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a\": container with ID starting with 479501261d9750c2588c469e251916c8092dde100ed8112228a96468cac3346a not found: ID does not exist" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.579010 4835 scope.go:117] "RemoveContainer" containerID="94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0" Feb 16 15:33:27 crc kubenswrapper[4835]: E0216 15:33:27.579293 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0\": container with ID starting with 94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0 not found: ID does not exist" containerID="94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0" Feb 16 15:33:27 crc kubenswrapper[4835]: I0216 15:33:27.579324 4835 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0"} err="failed to get container status \"94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0\": rpc error: code = NotFound desc = could not find container \"94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0\": container with ID starting with 94abcdf28dc204e5e4267ed297fa4bcc6a5546c274898a3f629c7ee8ff1225c0 not found: ID does not exist" Feb 16 15:33:28 crc kubenswrapper[4835]: E0216 15:33:28.381123 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:33:29 crc kubenswrapper[4835]: I0216 15:33:29.390189 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" path="/var/lib/kubelet/pods/8429a090-96fb-4ea6-b884-7fc9d8dc0376/volumes" Feb 16 15:33:33 crc kubenswrapper[4835]: I0216 15:33:33.378962 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:33:33 crc kubenswrapper[4835]: E0216 15:33:33.379422 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:33:41 crc kubenswrapper[4835]: E0216 15:33:41.390122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:33:48 crc kubenswrapper[4835]: I0216 15:33:48.379321 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:33:48 crc kubenswrapper[4835]: E0216 15:33:48.380199 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:33:54 crc kubenswrapper[4835]: E0216 15:33:54.381726 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:34:03 crc kubenswrapper[4835]: I0216 15:34:03.378485 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:34:03 crc kubenswrapper[4835]: E0216 15:34:03.379341 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 
16 15:34:09 crc kubenswrapper[4835]: E0216 15:34:09.384251 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:34:17 crc kubenswrapper[4835]: I0216 15:34:17.379192 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:34:17 crc kubenswrapper[4835]: E0216 15:34:17.380017 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:34:21 crc kubenswrapper[4835]: E0216 15:34:21.386836 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:34:30 crc kubenswrapper[4835]: I0216 15:34:30.378785 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:34:30 crc kubenswrapper[4835]: E0216 15:34:30.380098 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:34:34 crc kubenswrapper[4835]: I0216 15:34:34.384878 4835 scope.go:117] "RemoveContainer" containerID="56434d808d9111c94b448ca38f358f94fe1ebf96669135267d6a8c1148933b51" Feb 16 15:34:35 crc kubenswrapper[4835]: E0216 15:34:35.380829 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:34:42 crc kubenswrapper[4835]: I0216 15:34:42.379279 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:34:42 crc kubenswrapper[4835]: E0216 15:34:42.380457 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:34:47 crc kubenswrapper[4835]: E0216 15:34:47.381975 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:34:48 crc kubenswrapper[4835]: I0216 15:34:48.059258 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-db-create-xvrzt"] Feb 16 15:34:48 crc kubenswrapper[4835]: I0216 15:34:48.069587 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-mnvcx"] Feb 16 15:34:48 crc kubenswrapper[4835]: I0216 15:34:48.080880 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-mnvcx"] Feb 16 15:34:48 crc kubenswrapper[4835]: I0216 15:34:48.091576 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-xvrzt"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.055123 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bba3-account-create-update-jb6x5"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.074866 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-bba3-account-create-update-jb6x5"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.084672 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-z5tpt"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.093162 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1ee6-account-create-update-h7mfh"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.101978 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1ee6-account-create-update-h7mfh"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.110287 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-z5tpt"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.118297 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c06c-account-create-update-nnptn"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.127030 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c06c-account-create-update-nnptn"] Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.395807 4835 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1109c80c-b07a-4be2-8002-cadfbbc7e0af" path="/var/lib/kubelet/pods/1109c80c-b07a-4be2-8002-cadfbbc7e0af/volumes" Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.397277 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4724bb12-af57-4aba-9403-07a999cde053" path="/var/lib/kubelet/pods/4724bb12-af57-4aba-9403-07a999cde053/volumes" Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.399505 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578241de-b081-44dc-bab5-3ddfba91c2df" path="/var/lib/kubelet/pods/578241de-b081-44dc-bab5-3ddfba91c2df/volumes" Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.400906 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e95777ba-e0bd-4163-a3b4-7cfc9271a946" path="/var/lib/kubelet/pods/e95777ba-e0bd-4163-a3b4-7cfc9271a946/volumes" Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.403144 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea4a1831-012e-4c40-80ef-db237493e6ac" path="/var/lib/kubelet/pods/ea4a1831-012e-4c40-80ef-db237493e6ac/volumes" Feb 16 15:34:49 crc kubenswrapper[4835]: I0216 15:34:49.404481 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6808008-de25-4d2d-8753-945ad39d27b3" path="/var/lib/kubelet/pods/f6808008-de25-4d2d-8753-945ad39d27b3/volumes" Feb 16 15:34:55 crc kubenswrapper[4835]: I0216 15:34:55.379851 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:34:55 crc kubenswrapper[4835]: E0216 15:34:55.381022 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:35:02 crc kubenswrapper[4835]: E0216 15:35:02.380200 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.075161 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-srq6s"] Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.098104 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-90e7-account-create-update-s5jqp"] Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.112225 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-90e7-account-create-update-s5jqp"] Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.122624 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-srq6s"] Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.131361 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-85znn"] Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.141905 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-85znn"] Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.391671 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2afc8716-571e-4e91-8a87-61e144cb3e91" path="/var/lib/kubelet/pods/2afc8716-571e-4e91-8a87-61e144cb3e91/volumes" Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.392288 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e896978-24a0-4f5e-bbc8-e33a887a98c0" 
path="/var/lib/kubelet/pods/6e896978-24a0-4f5e-bbc8-e33a887a98c0/volumes" Feb 16 15:35:03 crc kubenswrapper[4835]: I0216 15:35:03.392933 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d" path="/var/lib/kubelet/pods/f41fedbd-58d0-4562-bd64-e9b1f1cfeb6d/volumes" Feb 16 15:35:08 crc kubenswrapper[4835]: I0216 15:35:08.379300 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:35:08 crc kubenswrapper[4835]: E0216 15:35:08.380165 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:35:10 crc kubenswrapper[4835]: I0216 15:35:10.030807 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-dtg9h"] Feb 16 15:35:10 crc kubenswrapper[4835]: I0216 15:35:10.040491 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-dtg9h"] Feb 16 15:35:11 crc kubenswrapper[4835]: I0216 15:35:11.393112 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d210b1-a527-417c-a74b-e3363616d04b" path="/var/lib/kubelet/pods/43d210b1-a527-417c-a74b-e3363616d04b/volumes" Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.032486 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qkfw5"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.044850 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2711-account-create-update-97ktr"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.055027 4835 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bm5j9"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.066740 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0b76-account-create-update-m2jgq"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.075736 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2711-account-create-update-97ktr"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.084258 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qkfw5"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.093311 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0b76-account-create-update-m2jgq"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.103277 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bm5j9"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.112721 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-ec07-account-create-update-mllh2"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.121832 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-ec07-account-create-update-mllh2"] Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.392204 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719b9570-6729-4015-bbaa-0865b16b86d6" path="/var/lib/kubelet/pods/719b9570-6729-4015-bbaa-0865b16b86d6/volumes" Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.392828 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dac99ad-85aa-4faf-b55d-224a04e2b659" path="/var/lib/kubelet/pods/7dac99ad-85aa-4faf-b55d-224a04e2b659/volumes" Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.393372 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee22ae0-4c3f-4a60-ad86-7f909b157b6b" 
path="/var/lib/kubelet/pods/7ee22ae0-4c3f-4a60-ad86-7f909b157b6b/volumes" Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.393994 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af17e5a7-3351-4d2c-9214-fc8221a15fe9" path="/var/lib/kubelet/pods/af17e5a7-3351-4d2c-9214-fc8221a15fe9/volumes" Feb 16 15:35:13 crc kubenswrapper[4835]: I0216 15:35:13.395065 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc" path="/var/lib/kubelet/pods/d4ed45f6-6a4d-4e8e-ab80-2bf365b5fbbc/volumes" Feb 16 15:35:14 crc kubenswrapper[4835]: E0216 15:35:14.381501 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:35:21 crc kubenswrapper[4835]: I0216 15:35:21.050811 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-chltq"] Feb 16 15:35:21 crc kubenswrapper[4835]: I0216 15:35:21.062710 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-chltq"] Feb 16 15:35:21 crc kubenswrapper[4835]: I0216 15:35:21.393503 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12db908e-5604-4c20-bfa8-ee01f8bac719" path="/var/lib/kubelet/pods/12db908e-5604-4c20-bfa8-ee01f8bac719/volumes" Feb 16 15:35:23 crc kubenswrapper[4835]: I0216 15:35:23.378649 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:35:23 crc kubenswrapper[4835]: E0216 15:35:23.379286 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:35:26 crc kubenswrapper[4835]: E0216 15:35:26.381162 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:35:33 crc kubenswrapper[4835]: I0216 15:35:33.062063 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-9gx68"] Feb 16 15:35:33 crc kubenswrapper[4835]: I0216 15:35:33.078052 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-9gx68"] Feb 16 15:35:33 crc kubenswrapper[4835]: I0216 15:35:33.394150 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ac121e-3070-48a4-94df-938421346b96" path="/var/lib/kubelet/pods/99ac121e-3070-48a4-94df-938421346b96/volumes" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.463341 4835 scope.go:117] "RemoveContainer" containerID="0254adea66119980500110bf5476b48de449be7a5fd5ccd66670e15f41c0d8da" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.503303 4835 scope.go:117] "RemoveContainer" containerID="1b465b241de862723ce7f3bfd761cd47da79889f07ddc8886b34ef2a1064e885" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.578996 4835 scope.go:117] "RemoveContainer" containerID="1f52696865448b33775d5bb0cd992b560e790bc769264ff332657abeffc58e3e" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.636584 4835 scope.go:117] "RemoveContainer" containerID="380fedb0b1f9685f32232358d41c8717f51db16139290b11d055a794b9ef69bf" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.685022 4835 scope.go:117] 
"RemoveContainer" containerID="99212ac33b423a848f75337c7d41067df1ea00a35a71f61b28557e9648f0765a" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.727062 4835 scope.go:117] "RemoveContainer" containerID="5bbbc79bd8c474bc62ee085349da0604456d84cfb63fb7d0e31661454fa854a5" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.783120 4835 scope.go:117] "RemoveContainer" containerID="c7dc7e3fbb1fd3e16e1606b6e6cdb2afb1325e083941874321988bc47b0e6233" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.817917 4835 scope.go:117] "RemoveContainer" containerID="282a2bfc39068383fa47e81ecc2dd3123f81857610b5bb328a7b21207e34ec53" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.848721 4835 scope.go:117] "RemoveContainer" containerID="e020718c964ec483f0caad7938898c6e69a1ecae07b4415440f14dd72065c661" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.892317 4835 scope.go:117] "RemoveContainer" containerID="84b81b2535b7a5b5a6c602923069cf26d31a5f30de8a4978b0dfaa6294cddfeb" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.915585 4835 scope.go:117] "RemoveContainer" containerID="1b1268a9a545c4aa86c1fd07bb99fab9493d57738a823acd6d45606f02c6434d" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.934967 4835 scope.go:117] "RemoveContainer" containerID="94f39ec6499a3160833fb194cf31bf253ecfc1329210bd08638866af6476c0be" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.953786 4835 scope.go:117] "RemoveContainer" containerID="eb2f498c27e61160d5de9931604f1273d6cca14d697ee0f28d6c2d6a42bc7146" Feb 16 15:35:34 crc kubenswrapper[4835]: I0216 15:35:34.976022 4835 scope.go:117] "RemoveContainer" containerID="bc9048a113e4b19e6c72db3a50337c1145168cff317a85fc85129ffce3837462" Feb 16 15:35:35 crc kubenswrapper[4835]: I0216 15:35:35.000156 4835 scope.go:117] "RemoveContainer" containerID="dad597ac6a40611bf47d7a079dca5cd3d946c265017395986fc2cfc357ed8c5d" Feb 16 15:35:35 crc kubenswrapper[4835]: I0216 15:35:35.020511 4835 scope.go:117] "RemoveContainer" 
containerID="b6ad8c630aa31e5cde2c9cacc1c22d78625cb36cd838c0dfefbc12fbed84e951" Feb 16 15:35:35 crc kubenswrapper[4835]: I0216 15:35:35.048024 4835 scope.go:117] "RemoveContainer" containerID="9edd71ad76f0980e335e1a267c67fef8470920e4d4757633d84f7614c90b4194" Feb 16 15:35:37 crc kubenswrapper[4835]: E0216 15:35:37.380989 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:35:38 crc kubenswrapper[4835]: I0216 15:35:38.380058 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:35:38 crc kubenswrapper[4835]: E0216 15:35:38.380670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:35:52 crc kubenswrapper[4835]: I0216 15:35:52.379663 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:35:52 crc kubenswrapper[4835]: E0216 15:35:52.380414 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" 
podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:35:52 crc kubenswrapper[4835]: E0216 15:35:52.382277 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:35:56 crc kubenswrapper[4835]: I0216 15:35:56.029030 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lkz2b"] Feb 16 15:35:56 crc kubenswrapper[4835]: I0216 15:35:56.037464 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lkz2b"] Feb 16 15:35:57 crc kubenswrapper[4835]: I0216 15:35:57.396759 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6dd766-e3c2-4559-920f-b39e5fde5526" path="/var/lib/kubelet/pods/7b6dd766-e3c2-4559-920f-b39e5fde5526/volumes" Feb 16 15:36:00 crc kubenswrapper[4835]: I0216 15:36:00.047798 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8fld2"] Feb 16 15:36:00 crc kubenswrapper[4835]: I0216 15:36:00.060441 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8fld2"] Feb 16 15:36:01 crc kubenswrapper[4835]: I0216 15:36:01.392515 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca5b397-cd76-4bc1-9552-caecb0f37375" path="/var/lib/kubelet/pods/7ca5b397-cd76-4bc1-9552-caecb0f37375/volumes" Feb 16 15:36:06 crc kubenswrapper[4835]: E0216 15:36:06.382642 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 
15:36:07 crc kubenswrapper[4835]: I0216 15:36:07.048491 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2pdkm"] Feb 16 15:36:07 crc kubenswrapper[4835]: I0216 15:36:07.058383 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2pdkm"] Feb 16 15:36:07 crc kubenswrapper[4835]: I0216 15:36:07.379182 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:36:07 crc kubenswrapper[4835]: E0216 15:36:07.379498 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:36:07 crc kubenswrapper[4835]: I0216 15:36:07.394997 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3" path="/var/lib/kubelet/pods/1c74ec26-1a6a-4e4d-be0f-b53bfd4864e3/volumes" Feb 16 15:36:15 crc kubenswrapper[4835]: I0216 15:36:15.040838 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7zndp"] Feb 16 15:36:15 crc kubenswrapper[4835]: I0216 15:36:15.059259 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7zndp"] Feb 16 15:36:15 crc kubenswrapper[4835]: I0216 15:36:15.395433 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d68cbc-724d-490f-ae49-654aac2eb8ba" path="/var/lib/kubelet/pods/f8d68cbc-724d-490f-ae49-654aac2eb8ba/volumes" Feb 16 15:36:16 crc kubenswrapper[4835]: I0216 15:36:16.043978 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-bmq2c"] Feb 16 15:36:16 crc kubenswrapper[4835]: 
I0216 15:36:16.059520 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-bmq2c"] Feb 16 15:36:17 crc kubenswrapper[4835]: I0216 15:36:17.405514 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb4f111-43c6-46d3-aa98-82d93b71b723" path="/var/lib/kubelet/pods/eeb4f111-43c6-46d3-aa98-82d93b71b723/volumes" Feb 16 15:36:18 crc kubenswrapper[4835]: I0216 15:36:18.378908 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:36:18 crc kubenswrapper[4835]: E0216 15:36:18.379261 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:36:19 crc kubenswrapper[4835]: E0216 15:36:19.527312 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:36:19 crc kubenswrapper[4835]: E0216 15:36:19.527375 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:36:19 crc kubenswrapper[4835]: E0216 15:36:19.527519 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:36:19 crc kubenswrapper[4835]: E0216 15:36:19.529760 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:36:31 crc kubenswrapper[4835]: I0216 15:36:31.388170 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:36:31 crc kubenswrapper[4835]: E0216 15:36:31.389702 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:36:31 crc kubenswrapper[4835]: E0216 15:36:31.392257 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:36:35 crc kubenswrapper[4835]: I0216 15:36:35.369567 4835 scope.go:117] "RemoveContainer" containerID="fdf765a0d652d2984d1dd6dd0862d2c85d158d3ee6ef9c9cea8567b474179b9e" Feb 16 15:36:35 crc kubenswrapper[4835]: I0216 15:36:35.425167 4835 scope.go:117] "RemoveContainer" containerID="2e64983c2028c67df9406fd037236b4fe861334b06649e63b31b9656798f1ed1" Feb 16 15:36:35 crc kubenswrapper[4835]: I0216 15:36:35.455072 4835 scope.go:117] "RemoveContainer" containerID="7b104e3ee56b28e6b18dd0aa24c1253ab3520c201cfeaf4c47e730f6b37bdd03" Feb 16 15:36:35 crc kubenswrapper[4835]: I0216 15:36:35.513594 4835 scope.go:117] "RemoveContainer" containerID="c414f12ec2c165eef4869643300f36c8d109643185c54b3c088b39babfc857e5" Feb 16 15:36:35 crc kubenswrapper[4835]: I0216 15:36:35.554640 4835 scope.go:117] 
"RemoveContainer" containerID="fc84930b937ca4753ccd161c123df25f946e19625ce483e5f921c5ef27c4e41f" Feb 16 15:36:44 crc kubenswrapper[4835]: I0216 15:36:44.379342 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:36:44 crc kubenswrapper[4835]: E0216 15:36:44.380301 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:36:45 crc kubenswrapper[4835]: E0216 15:36:45.382266 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:36:55 crc kubenswrapper[4835]: I0216 15:36:55.378926 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:36:55 crc kubenswrapper[4835]: E0216 15:36:55.379672 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:36:56 crc kubenswrapper[4835]: E0216 15:36:56.381041 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:37:00 crc kubenswrapper[4835]: I0216 15:37:00.068776 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-l5r6k"] Feb 16 15:37:00 crc kubenswrapper[4835]: I0216 15:37:00.087115 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7e91-account-create-update-rv2sg"] Feb 16 15:37:00 crc kubenswrapper[4835]: I0216 15:37:00.097228 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7e91-account-create-update-rv2sg"] Feb 16 15:37:00 crc kubenswrapper[4835]: I0216 15:37:00.107102 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-l5r6k"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.053665 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cbd6-account-create-update-wl4lp"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.065072 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ca02-account-create-update-cwxqj"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.088275 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qgglt"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.102761 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9xl7p"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.111861 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cbd6-account-create-update-wl4lp"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.121720 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qgglt"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 
15:37:01.133640 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ca02-account-create-update-cwxqj"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.144039 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9xl7p"] Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.398642 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c74289-acd3-4046-9eb5-e8668093107a" path="/var/lib/kubelet/pods/02c74289-acd3-4046-9eb5-e8668093107a/volumes" Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.400109 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48670262-e3bb-41f9-9c4e-cb1ee3608961" path="/var/lib/kubelet/pods/48670262-e3bb-41f9-9c4e-cb1ee3608961/volumes" Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.401407 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="488a0795-d2d4-4eb0-899a-305faff595d5" path="/var/lib/kubelet/pods/488a0795-d2d4-4eb0-899a-305faff595d5/volumes" Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.402679 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8f4549-89bc-42f8-9c99-64f495486dc9" path="/var/lib/kubelet/pods/4b8f4549-89bc-42f8-9c99-64f495486dc9/volumes" Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.404653 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b7782b-02cd-48f4-9955-a3e2a698e687" path="/var/lib/kubelet/pods/64b7782b-02cd-48f4-9955-a3e2a698e687/volumes" Feb 16 15:37:01 crc kubenswrapper[4835]: I0216 15:37:01.405837 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7" path="/var/lib/kubelet/pods/b8c0750a-3fee-44f7-bb3a-ce4bc7ee64c7/volumes" Feb 16 15:37:07 crc kubenswrapper[4835]: I0216 15:37:07.379214 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:37:07 crc 
kubenswrapper[4835]: E0216 15:37:07.380597 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:37:11 crc kubenswrapper[4835]: E0216 15:37:11.388666 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:37:19 crc kubenswrapper[4835]: I0216 15:37:19.380044 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:37:19 crc kubenswrapper[4835]: E0216 15:37:19.381317 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:37:22 crc kubenswrapper[4835]: E0216 15:37:22.380607 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:37:26 crc kubenswrapper[4835]: I0216 
15:37:26.042595 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbln5"] Feb 16 15:37:26 crc kubenswrapper[4835]: I0216 15:37:26.057435 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fbln5"] Feb 16 15:37:27 crc kubenswrapper[4835]: I0216 15:37:27.389355 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df55a8b2-fe66-43ed-9afe-f3c3b6316a51" path="/var/lib/kubelet/pods/df55a8b2-fe66-43ed-9afe-f3c3b6316a51/volumes" Feb 16 15:37:31 crc kubenswrapper[4835]: I0216 15:37:31.384713 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:37:31 crc kubenswrapper[4835]: E0216 15:37:31.385397 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:37:33 crc kubenswrapper[4835]: E0216 15:37:33.381647 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:37:35 crc kubenswrapper[4835]: I0216 15:37:35.696389 4835 scope.go:117] "RemoveContainer" containerID="669cc378df79b2185e560475a86717f1b1d1d1c3ab542511d5d496c2984eee49" Feb 16 15:37:35 crc kubenswrapper[4835]: I0216 15:37:35.756680 4835 scope.go:117] "RemoveContainer" containerID="d9e7d4bbd2543f0d028b8b255a36451abb6ca96905fb28f9324e7ca0dcd88190" Feb 16 15:37:35 crc 
kubenswrapper[4835]: I0216 15:37:35.803754 4835 scope.go:117] "RemoveContainer" containerID="18f3e3d8ecbeba5e03c1133749af2644fe3f10f336ca124414a9a556bcd88109" Feb 16 15:37:35 crc kubenswrapper[4835]: I0216 15:37:35.853872 4835 scope.go:117] "RemoveContainer" containerID="e1272809359d5bc1d4d558973bdf2b6e9666c62e80f04be8847d592eaad7584a" Feb 16 15:37:35 crc kubenswrapper[4835]: I0216 15:37:35.892324 4835 scope.go:117] "RemoveContainer" containerID="a45a9aeaa81dd6d522a12875cc64c9439197ed7ec5abae0ff1c540c0663eba3a" Feb 16 15:37:35 crc kubenswrapper[4835]: I0216 15:37:35.934920 4835 scope.go:117] "RemoveContainer" containerID="fa3678f5e48fe8de13d6b5690d8261b543d74d12fc1e30aec0d48911b918c832" Feb 16 15:37:35 crc kubenswrapper[4835]: I0216 15:37:35.975826 4835 scope.go:117] "RemoveContainer" containerID="d703a5dac013d69f616856ad02ed2a18664bc6fe5bfd7c50b1cd6c9e7c56df43" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.678286 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bf6bf"] Feb 16 15:37:45 crc kubenswrapper[4835]: E0216 15:37:45.679396 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerName="extract-utilities" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.679414 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerName="extract-utilities" Feb 16 15:37:45 crc kubenswrapper[4835]: E0216 15:37:45.679439 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerName="extract-content" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.679447 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerName="extract-content" Feb 16 15:37:45 crc kubenswrapper[4835]: E0216 15:37:45.679467 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerName="registry-server" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.679476 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerName="registry-server" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.679779 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8429a090-96fb-4ea6-b884-7fc9d8dc0376" containerName="registry-server" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.681865 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.702656 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bf6bf"] Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.734516 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-utilities\") pod \"certified-operators-bf6bf\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.734600 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bgv\" (UniqueName: \"kubernetes.io/projected/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-kube-api-access-b6bgv\") pod \"certified-operators-bf6bf\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.734688 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-catalog-content\") pod \"certified-operators-bf6bf\" (UID: 
\"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.836740 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6bgv\" (UniqueName: \"kubernetes.io/projected/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-kube-api-access-b6bgv\") pod \"certified-operators-bf6bf\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.836851 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-catalog-content\") pod \"certified-operators-bf6bf\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.836964 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-utilities\") pod \"certified-operators-bf6bf\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.837784 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-catalog-content\") pod \"certified-operators-bf6bf\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.837814 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-utilities\") pod \"certified-operators-bf6bf\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") 
" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.856700 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6bgv\" (UniqueName: \"kubernetes.io/projected/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-kube-api-access-b6bgv\") pod \"certified-operators-bf6bf\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.873106 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jdpfv"] Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.875575 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:45 crc kubenswrapper[4835]: I0216 15:37:45.895308 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdpfv"] Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.009713 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.051169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7znw\" (UniqueName: \"kubernetes.io/projected/ba58bab8-1392-4c38-a0e1-c69a21eccb81-kube-api-access-q7znw\") pod \"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.051350 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-catalog-content\") pod \"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.051397 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-utilities\") pod \"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.152764 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-catalog-content\") pod \"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.153104 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-utilities\") pod 
\"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.153176 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7znw\" (UniqueName: \"kubernetes.io/projected/ba58bab8-1392-4c38-a0e1-c69a21eccb81-kube-api-access-q7znw\") pod \"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.154014 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-catalog-content\") pod \"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.154234 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-utilities\") pod \"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.177662 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7znw\" (UniqueName: \"kubernetes.io/projected/ba58bab8-1392-4c38-a0e1-c69a21eccb81-kube-api-access-q7znw\") pod \"community-operators-jdpfv\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.266131 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.380225 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:37:46 crc kubenswrapper[4835]: E0216 15:37:46.380595 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:37:46 crc kubenswrapper[4835]: E0216 15:37:46.388027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.560439 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bf6bf"] Feb 16 15:37:46 crc kubenswrapper[4835]: I0216 15:37:46.849944 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdpfv"] Feb 16 15:37:46 crc kubenswrapper[4835]: W0216 15:37:46.883426 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba58bab8_1392_4c38_a0e1_c69a21eccb81.slice/crio-5ab2e887b98d55aff9b7d6c3e75c101e3736fb4258a0ee75aa5c4148c6571c1e WatchSource:0}: Error finding container 5ab2e887b98d55aff9b7d6c3e75c101e3736fb4258a0ee75aa5c4148c6571c1e: Status 404 returned error can't find the container with id 
5ab2e887b98d55aff9b7d6c3e75c101e3736fb4258a0ee75aa5c4148c6571c1e Feb 16 15:37:47 crc kubenswrapper[4835]: I0216 15:37:47.543421 4835 generic.go:334] "Generic (PLEG): container finished" podID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerID="64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71" exitCode=0 Feb 16 15:37:47 crc kubenswrapper[4835]: I0216 15:37:47.543483 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf6bf" event={"ID":"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea","Type":"ContainerDied","Data":"64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71"} Feb 16 15:37:47 crc kubenswrapper[4835]: I0216 15:37:47.543852 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf6bf" event={"ID":"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea","Type":"ContainerStarted","Data":"c6fff994bfe58c9a59da6c0b1a159b5b6ef4e962fe421e6cc4cd9f82c4f14349"} Feb 16 15:37:47 crc kubenswrapper[4835]: I0216 15:37:47.546262 4835 generic.go:334] "Generic (PLEG): container finished" podID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerID="6e9dbe2752edcc8e2a209b962eabcbf254f12cc8dc51578eec2f40fefb4b8a20" exitCode=0 Feb 16 15:37:47 crc kubenswrapper[4835]: I0216 15:37:47.546293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdpfv" event={"ID":"ba58bab8-1392-4c38-a0e1-c69a21eccb81","Type":"ContainerDied","Data":"6e9dbe2752edcc8e2a209b962eabcbf254f12cc8dc51578eec2f40fefb4b8a20"} Feb 16 15:37:47 crc kubenswrapper[4835]: I0216 15:37:47.546312 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdpfv" event={"ID":"ba58bab8-1392-4c38-a0e1-c69a21eccb81","Type":"ContainerStarted","Data":"5ab2e887b98d55aff9b7d6c3e75c101e3736fb4258a0ee75aa5c4148c6571c1e"} Feb 16 15:37:48 crc kubenswrapper[4835]: I0216 15:37:48.053102 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-xt45f"] Feb 16 15:37:48 crc kubenswrapper[4835]: I0216 15:37:48.061686 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xj8wc"] Feb 16 15:37:48 crc kubenswrapper[4835]: I0216 15:37:48.071762 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xt45f"] Feb 16 15:37:48 crc kubenswrapper[4835]: I0216 15:37:48.080164 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xj8wc"] Feb 16 15:37:48 crc kubenswrapper[4835]: I0216 15:37:48.556265 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf6bf" event={"ID":"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea","Type":"ContainerStarted","Data":"f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856"} Feb 16 15:37:49 crc kubenswrapper[4835]: I0216 15:37:49.393508 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ab2659-cc62-4f55-981e-2f2887d7fbe1" path="/var/lib/kubelet/pods/81ab2659-cc62-4f55-981e-2f2887d7fbe1/volumes" Feb 16 15:37:49 crc kubenswrapper[4835]: I0216 15:37:49.394328 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d67a4473-b44e-41b6-b975-25f3f4f34ad8" path="/var/lib/kubelet/pods/d67a4473-b44e-41b6-b975-25f3f4f34ad8/volumes" Feb 16 15:37:49 crc kubenswrapper[4835]: I0216 15:37:49.568089 4835 generic.go:334] "Generic (PLEG): container finished" podID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerID="f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856" exitCode=0 Feb 16 15:37:49 crc kubenswrapper[4835]: I0216 15:37:49.568167 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf6bf" event={"ID":"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea","Type":"ContainerDied","Data":"f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856"} Feb 16 15:37:49 crc kubenswrapper[4835]: 
I0216 15:37:49.572684 4835 generic.go:334] "Generic (PLEG): container finished" podID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerID="7d3e8875ffb7945ebd6caead531b44a8685e565a092fc0524be0ca4f87175e87" exitCode=0 Feb 16 15:37:49 crc kubenswrapper[4835]: I0216 15:37:49.572707 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdpfv" event={"ID":"ba58bab8-1392-4c38-a0e1-c69a21eccb81","Type":"ContainerDied","Data":"7d3e8875ffb7945ebd6caead531b44a8685e565a092fc0524be0ca4f87175e87"} Feb 16 15:37:50 crc kubenswrapper[4835]: I0216 15:37:50.584871 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf6bf" event={"ID":"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea","Type":"ContainerStarted","Data":"623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4"} Feb 16 15:37:50 crc kubenswrapper[4835]: I0216 15:37:50.589462 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdpfv" event={"ID":"ba58bab8-1392-4c38-a0e1-c69a21eccb81","Type":"ContainerStarted","Data":"aa986276e3e00d0aaf4fc00732e393c59b3f3454d4403655791574fa5d0ac895"} Feb 16 15:37:50 crc kubenswrapper[4835]: I0216 15:37:50.605674 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bf6bf" podStartSLOduration=2.948054139 podStartE2EDuration="5.605656779s" podCreationTimestamp="2026-02-16 15:37:45 +0000 UTC" firstStartedPulling="2026-02-16 15:37:47.548052851 +0000 UTC m=+1816.840045756" lastFinishedPulling="2026-02-16 15:37:50.205655501 +0000 UTC m=+1819.497648396" observedRunningTime="2026-02-16 15:37:50.602321132 +0000 UTC m=+1819.894314027" watchObservedRunningTime="2026-02-16 15:37:50.605656779 +0000 UTC m=+1819.897649674" Feb 16 15:37:50 crc kubenswrapper[4835]: I0216 15:37:50.631810 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-jdpfv" podStartSLOduration=3.220531291 podStartE2EDuration="5.631789255s" podCreationTimestamp="2026-02-16 15:37:45 +0000 UTC" firstStartedPulling="2026-02-16 15:37:47.548556184 +0000 UTC m=+1816.840549089" lastFinishedPulling="2026-02-16 15:37:49.959814158 +0000 UTC m=+1819.251807053" observedRunningTime="2026-02-16 15:37:50.622358598 +0000 UTC m=+1819.914351513" watchObservedRunningTime="2026-02-16 15:37:50.631789255 +0000 UTC m=+1819.923782160" Feb 16 15:37:56 crc kubenswrapper[4835]: I0216 15:37:56.010464 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:56 crc kubenswrapper[4835]: I0216 15:37:56.011079 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:56 crc kubenswrapper[4835]: I0216 15:37:56.069339 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:56 crc kubenswrapper[4835]: I0216 15:37:56.266777 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:56 crc kubenswrapper[4835]: I0216 15:37:56.266833 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:56 crc kubenswrapper[4835]: I0216 15:37:56.316179 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:56 crc kubenswrapper[4835]: I0216 15:37:56.738511 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:56 crc kubenswrapper[4835]: I0216 15:37:56.739121 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:37:57 crc kubenswrapper[4835]: I0216 15:37:57.710776 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bf6bf"] Feb 16 15:37:58 crc kubenswrapper[4835]: I0216 15:37:58.378723 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:37:58 crc kubenswrapper[4835]: E0216 15:37:58.379078 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:37:58 crc kubenswrapper[4835]: I0216 15:37:58.702491 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bf6bf" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerName="registry-server" containerID="cri-o://623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4" gracePeriod=2 Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.112038 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdpfv"] Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.112296 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jdpfv" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerName="registry-server" containerID="cri-o://aa986276e3e00d0aaf4fc00732e393c59b3f3454d4403655791574fa5d0ac895" gracePeriod=2 Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.178502 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.263509 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-utilities\") pod \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.263698 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6bgv\" (UniqueName: \"kubernetes.io/projected/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-kube-api-access-b6bgv\") pod \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.263747 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-catalog-content\") pod \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\" (UID: \"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea\") " Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.264440 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-utilities" (OuterVolumeSpecName: "utilities") pod "f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" (UID: "f151bbe5-e19f-40dc-a7f2-f4502d75f3ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.268469 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-kube-api-access-b6bgv" (OuterVolumeSpecName: "kube-api-access-b6bgv") pod "f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" (UID: "f151bbe5-e19f-40dc-a7f2-f4502d75f3ea"). InnerVolumeSpecName "kube-api-access-b6bgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.366702 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6bgv\" (UniqueName: \"kubernetes.io/projected/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-kube-api-access-b6bgv\") on node \"crc\" DevicePath \"\"" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.366738 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:37:59 crc kubenswrapper[4835]: E0216 15:37:59.380247 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.715775 4835 generic.go:334] "Generic (PLEG): container finished" podID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerID="623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4" exitCode=0 Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.715846 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf6bf" event={"ID":"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea","Type":"ContainerDied","Data":"623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4"} Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.715875 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bf6bf" event={"ID":"f151bbe5-e19f-40dc-a7f2-f4502d75f3ea","Type":"ContainerDied","Data":"c6fff994bfe58c9a59da6c0b1a159b5b6ef4e962fe421e6cc4cd9f82c4f14349"} Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.715868 4835 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bf6bf" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.715932 4835 scope.go:117] "RemoveContainer" containerID="623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.718614 4835 generic.go:334] "Generic (PLEG): container finished" podID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerID="aa986276e3e00d0aaf4fc00732e393c59b3f3454d4403655791574fa5d0ac895" exitCode=0 Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.718636 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdpfv" event={"ID":"ba58bab8-1392-4c38-a0e1-c69a21eccb81","Type":"ContainerDied","Data":"aa986276e3e00d0aaf4fc00732e393c59b3f3454d4403655791574fa5d0ac895"} Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.733595 4835 scope.go:117] "RemoveContainer" containerID="f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.756610 4835 scope.go:117] "RemoveContainer" containerID="64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.787581 4835 scope.go:117] "RemoveContainer" containerID="623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4" Feb 16 15:37:59 crc kubenswrapper[4835]: E0216 15:37:59.788027 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4\": container with ID starting with 623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4 not found: ID does not exist" containerID="623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.788059 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4"} err="failed to get container status \"623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4\": rpc error: code = NotFound desc = could not find container \"623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4\": container with ID starting with 623fee07d372f134738c7872c07584edd86630fbc84dc53959034d63d1d21bc4 not found: ID does not exist" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.788080 4835 scope.go:117] "RemoveContainer" containerID="f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856" Feb 16 15:37:59 crc kubenswrapper[4835]: E0216 15:37:59.788412 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856\": container with ID starting with f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856 not found: ID does not exist" containerID="f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.788434 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856"} err="failed to get container status \"f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856\": rpc error: code = NotFound desc = could not find container \"f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856\": container with ID starting with f8c3e3162c3701fe0415e88e8bf2937e6deace2ac3f0be7a2b9b60bbe6dc5856 not found: ID does not exist" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.788448 4835 scope.go:117] "RemoveContainer" containerID="64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71" Feb 16 15:37:59 crc kubenswrapper[4835]: E0216 15:37:59.788771 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71\": container with ID starting with 64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71 not found: ID does not exist" containerID="64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71" Feb 16 15:37:59 crc kubenswrapper[4835]: I0216 15:37:59.788794 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71"} err="failed to get container status \"64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71\": rpc error: code = NotFound desc = could not find container \"64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71\": container with ID starting with 64cacfd49438a56273079767cff8f59c62628900c2c5e9594fdcd9229595ca71 not found: ID does not exist" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.014828 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" (UID: "f151bbe5-e19f-40dc-a7f2-f4502d75f3ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.088681 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.102127 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7znw\" (UniqueName: \"kubernetes.io/projected/ba58bab8-1392-4c38-a0e1-c69a21eccb81-kube-api-access-q7znw\") pod \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.102188 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-catalog-content\") pod \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.102318 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-utilities\") pod \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\" (UID: \"ba58bab8-1392-4c38-a0e1-c69a21eccb81\") " Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.103803 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.106221 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-utilities" (OuterVolumeSpecName: "utilities") pod "ba58bab8-1392-4c38-a0e1-c69a21eccb81" (UID: "ba58bab8-1392-4c38-a0e1-c69a21eccb81"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.106874 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bf6bf"] Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.112979 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba58bab8-1392-4c38-a0e1-c69a21eccb81-kube-api-access-q7znw" (OuterVolumeSpecName: "kube-api-access-q7znw") pod "ba58bab8-1392-4c38-a0e1-c69a21eccb81" (UID: "ba58bab8-1392-4c38-a0e1-c69a21eccb81"). InnerVolumeSpecName "kube-api-access-q7znw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.128159 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bf6bf"] Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.164318 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba58bab8-1392-4c38-a0e1-c69a21eccb81" (UID: "ba58bab8-1392-4c38-a0e1-c69a21eccb81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.205002 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7znw\" (UniqueName: \"kubernetes.io/projected/ba58bab8-1392-4c38-a0e1-c69a21eccb81-kube-api-access-q7znw\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.205039 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.205050 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba58bab8-1392-4c38-a0e1-c69a21eccb81-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.740315 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdpfv" event={"ID":"ba58bab8-1392-4c38-a0e1-c69a21eccb81","Type":"ContainerDied","Data":"5ab2e887b98d55aff9b7d6c3e75c101e3736fb4258a0ee75aa5c4148c6571c1e"} Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.740374 4835 scope.go:117] "RemoveContainer" containerID="aa986276e3e00d0aaf4fc00732e393c59b3f3454d4403655791574fa5d0ac895" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.740426 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jdpfv" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.765280 4835 scope.go:117] "RemoveContainer" containerID="7d3e8875ffb7945ebd6caead531b44a8685e565a092fc0524be0ca4f87175e87" Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.792079 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdpfv"] Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.800761 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jdpfv"] Feb 16 15:38:00 crc kubenswrapper[4835]: I0216 15:38:00.811830 4835 scope.go:117] "RemoveContainer" containerID="6e9dbe2752edcc8e2a209b962eabcbf254f12cc8dc51578eec2f40fefb4b8a20" Feb 16 15:38:01 crc kubenswrapper[4835]: I0216 15:38:01.392362 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" path="/var/lib/kubelet/pods/ba58bab8-1392-4c38-a0e1-c69a21eccb81/volumes" Feb 16 15:38:01 crc kubenswrapper[4835]: I0216 15:38:01.393295 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" path="/var/lib/kubelet/pods/f151bbe5-e19f-40dc-a7f2-f4502d75f3ea/volumes" Feb 16 15:38:10 crc kubenswrapper[4835]: E0216 15:38:10.382487 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:38:12 crc kubenswrapper[4835]: I0216 15:38:12.379235 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:38:12 crc kubenswrapper[4835]: E0216 15:38:12.379852 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:38:21 crc kubenswrapper[4835]: E0216 15:38:21.393383 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:38:25 crc kubenswrapper[4835]: I0216 15:38:25.379864 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:38:26 crc kubenswrapper[4835]: I0216 15:38:26.026240 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"a9c097a6ce48683144557a11d651a91fd3f122dd3409f9488f75b0cb97938c5c"} Feb 16 15:38:33 crc kubenswrapper[4835]: I0216 15:38:33.038847 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bc6t7"] Feb 16 15:38:33 crc kubenswrapper[4835]: I0216 15:38:33.050132 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bc6t7"] Feb 16 15:38:33 crc kubenswrapper[4835]: I0216 15:38:33.389442 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3513d5-d2c9-48aa-8264-a7728591bf53" path="/var/lib/kubelet/pods/ad3513d5-d2c9-48aa-8264-a7728591bf53/volumes" Feb 16 15:38:35 crc kubenswrapper[4835]: E0216 15:38:35.380295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:38:36 crc kubenswrapper[4835]: I0216 15:38:36.121762 4835 scope.go:117] "RemoveContainer" containerID="86487022e8d0a5e138828d7edafbdf59326490463e88e4a30b258ec8a8ee3d78" Feb 16 15:38:36 crc kubenswrapper[4835]: I0216 15:38:36.157993 4835 scope.go:117] "RemoveContainer" containerID="1df3385d1f34bcbf22783a9e95e43343e0835c13fad43cc54c6a970978a3135f" Feb 16 15:38:36 crc kubenswrapper[4835]: I0216 15:38:36.206262 4835 scope.go:117] "RemoveContainer" containerID="b7a4d0123bb745b3d3a98a494901c671a07757f07dc33822953be3ea70cc95ee" Feb 16 15:38:46 crc kubenswrapper[4835]: E0216 15:38:46.381595 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:39:00 crc kubenswrapper[4835]: E0216 15:39:00.385805 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:39:11 crc kubenswrapper[4835]: E0216 15:39:11.386989 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" 
podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:39:25 crc kubenswrapper[4835]: E0216 15:39:25.381926 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:39:39 crc kubenswrapper[4835]: E0216 15:39:39.381231 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:39:54 crc kubenswrapper[4835]: E0216 15:39:54.380769 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:40:05 crc kubenswrapper[4835]: E0216 15:40:05.380802 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:40:20 crc kubenswrapper[4835]: E0216 15:40:20.380155 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:40:31 crc kubenswrapper[4835]: E0216 15:40:31.386655 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:40:45 crc kubenswrapper[4835]: E0216 15:40:45.381190 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:40:48 crc kubenswrapper[4835]: I0216 15:40:48.586303 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:40:48 crc kubenswrapper[4835]: I0216 15:40:48.586733 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:40:58 crc kubenswrapper[4835]: E0216 15:40:58.381013 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" 
podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:41:12 crc kubenswrapper[4835]: E0216 15:41:12.381129 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.292804 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n5hnk"] Feb 16 15:41:18 crc kubenswrapper[4835]: E0216 15:41:18.293845 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerName="extract-utilities" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.293863 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerName="extract-utilities" Feb 16 15:41:18 crc kubenswrapper[4835]: E0216 15:41:18.293880 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerName="extract-utilities" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.293890 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerName="extract-utilities" Feb 16 15:41:18 crc kubenswrapper[4835]: E0216 15:41:18.293904 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerName="registry-server" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.293912 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerName="registry-server" Feb 16 15:41:18 crc kubenswrapper[4835]: E0216 15:41:18.293937 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" 
containerName="registry-server" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.293946 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerName="registry-server" Feb 16 15:41:18 crc kubenswrapper[4835]: E0216 15:41:18.293964 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerName="extract-content" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.293972 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerName="extract-content" Feb 16 15:41:18 crc kubenswrapper[4835]: E0216 15:41:18.293989 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerName="extract-content" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.293996 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerName="extract-content" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.294272 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f151bbe5-e19f-40dc-a7f2-f4502d75f3ea" containerName="registry-server" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.294294 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba58bab8-1392-4c38-a0e1-c69a21eccb81" containerName="registry-server" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.296158 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.321475 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5hnk"] Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.451061 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd98h\" (UniqueName: \"kubernetes.io/projected/1032c10a-b55a-40f0-919f-7569cbcd9c2f-kube-api-access-zd98h\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.451175 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-catalog-content\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.451205 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-utilities\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.553177 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd98h\" (UniqueName: \"kubernetes.io/projected/1032c10a-b55a-40f0-919f-7569cbcd9c2f-kube-api-access-zd98h\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.553335 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-catalog-content\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.553362 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-utilities\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.554010 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-catalog-content\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.554042 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-utilities\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.582288 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd98h\" (UniqueName: \"kubernetes.io/projected/1032c10a-b55a-40f0-919f-7569cbcd9c2f-kube-api-access-zd98h\") pod \"redhat-operators-n5hnk\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.586913 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.587145 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:41:18 crc kubenswrapper[4835]: I0216 15:41:18.620690 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:19 crc kubenswrapper[4835]: W0216 15:41:19.072160 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1032c10a_b55a_40f0_919f_7569cbcd9c2f.slice/crio-8c34f710afb4c8ffd677a4d0976545169287b55f20dc49b5cd755e49f341bf94 WatchSource:0}: Error finding container 8c34f710afb4c8ffd677a4d0976545169287b55f20dc49b5cd755e49f341bf94: Status 404 returned error can't find the container with id 8c34f710afb4c8ffd677a4d0976545169287b55f20dc49b5cd755e49f341bf94 Feb 16 15:41:19 crc kubenswrapper[4835]: I0216 15:41:19.073403 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5hnk"] Feb 16 15:41:19 crc kubenswrapper[4835]: I0216 15:41:19.381202 4835 generic.go:334] "Generic (PLEG): container finished" podID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerID="26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d" exitCode=0 Feb 16 15:41:19 crc kubenswrapper[4835]: I0216 15:41:19.382849 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:41:19 crc kubenswrapper[4835]: I0216 15:41:19.389648 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5hnk" event={"ID":"1032c10a-b55a-40f0-919f-7569cbcd9c2f","Type":"ContainerDied","Data":"26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d"} Feb 16 15:41:19 crc kubenswrapper[4835]: I0216 15:41:19.389698 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5hnk" event={"ID":"1032c10a-b55a-40f0-919f-7569cbcd9c2f","Type":"ContainerStarted","Data":"8c34f710afb4c8ffd677a4d0976545169287b55f20dc49b5cd755e49f341bf94"} Feb 16 15:41:20 crc kubenswrapper[4835]: I0216 15:41:20.391982 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5hnk" event={"ID":"1032c10a-b55a-40f0-919f-7569cbcd9c2f","Type":"ContainerStarted","Data":"8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4"} Feb 16 15:41:21 crc kubenswrapper[4835]: I0216 15:41:21.403737 4835 generic.go:334] "Generic (PLEG): container finished" podID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerID="8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4" exitCode=0 Feb 16 15:41:21 crc kubenswrapper[4835]: I0216 15:41:21.403790 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5hnk" event={"ID":"1032c10a-b55a-40f0-919f-7569cbcd9c2f","Type":"ContainerDied","Data":"8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4"} Feb 16 15:41:22 crc kubenswrapper[4835]: I0216 15:41:22.479969 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5hnk" event={"ID":"1032c10a-b55a-40f0-919f-7569cbcd9c2f","Type":"ContainerStarted","Data":"00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87"} Feb 16 15:41:22 crc kubenswrapper[4835]: I0216 15:41:22.517826 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n5hnk" podStartSLOduration=2.036032605 
podStartE2EDuration="4.517799586s" podCreationTimestamp="2026-02-16 15:41:18 +0000 UTC" firstStartedPulling="2026-02-16 15:41:19.382634636 +0000 UTC m=+2028.674627531" lastFinishedPulling="2026-02-16 15:41:21.864401597 +0000 UTC m=+2031.156394512" observedRunningTime="2026-02-16 15:41:22.506257893 +0000 UTC m=+2031.798250818" watchObservedRunningTime="2026-02-16 15:41:22.517799586 +0000 UTC m=+2031.809792481" Feb 16 15:41:25 crc kubenswrapper[4835]: E0216 15:41:25.498838 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:41:25 crc kubenswrapper[4835]: E0216 15:41:25.499113 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:41:25 crc kubenswrapper[4835]: E0216 15:41:25.499234 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:41:25 crc kubenswrapper[4835]: E0216 15:41:25.500572 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:41:28 crc kubenswrapper[4835]: I0216 15:41:28.621453 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:28 crc kubenswrapper[4835]: I0216 15:41:28.621852 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:29 crc kubenswrapper[4835]: I0216 15:41:29.669048 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n5hnk" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="registry-server" probeResult="failure" output=< Feb 16 15:41:29 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 16 15:41:29 crc kubenswrapper[4835]: > Feb 16 15:41:38 crc kubenswrapper[4835]: I0216 15:41:38.697060 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:38 crc kubenswrapper[4835]: I0216 15:41:38.771024 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:38 crc kubenswrapper[4835]: I0216 15:41:38.944790 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5hnk"] Feb 16 15:41:40 crc kubenswrapper[4835]: E0216 15:41:40.379878 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:41:40 crc kubenswrapper[4835]: I0216 15:41:40.674065 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-n5hnk" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="registry-server" containerID="cri-o://00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87" gracePeriod=2 Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.232912 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.375573 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd98h\" (UniqueName: \"kubernetes.io/projected/1032c10a-b55a-40f0-919f-7569cbcd9c2f-kube-api-access-zd98h\") pod \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.376079 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-catalog-content\") pod \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.376316 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-utilities\") pod \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\" (UID: \"1032c10a-b55a-40f0-919f-7569cbcd9c2f\") " Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.376891 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-utilities" (OuterVolumeSpecName: "utilities") pod "1032c10a-b55a-40f0-919f-7569cbcd9c2f" (UID: "1032c10a-b55a-40f0-919f-7569cbcd9c2f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.381367 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1032c10a-b55a-40f0-919f-7569cbcd9c2f-kube-api-access-zd98h" (OuterVolumeSpecName: "kube-api-access-zd98h") pod "1032c10a-b55a-40f0-919f-7569cbcd9c2f" (UID: "1032c10a-b55a-40f0-919f-7569cbcd9c2f"). InnerVolumeSpecName "kube-api-access-zd98h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.478441 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd98h\" (UniqueName: \"kubernetes.io/projected/1032c10a-b55a-40f0-919f-7569cbcd9c2f-kube-api-access-zd98h\") on node \"crc\" DevicePath \"\"" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.478577 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.499707 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1032c10a-b55a-40f0-919f-7569cbcd9c2f" (UID: "1032c10a-b55a-40f0-919f-7569cbcd9c2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.580847 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1032c10a-b55a-40f0-919f-7569cbcd9c2f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.687239 4835 generic.go:334] "Generic (PLEG): container finished" podID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerID="00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87" exitCode=0 Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.687284 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5hnk" event={"ID":"1032c10a-b55a-40f0-919f-7569cbcd9c2f","Type":"ContainerDied","Data":"00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87"} Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.687330 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5hnk" event={"ID":"1032c10a-b55a-40f0-919f-7569cbcd9c2f","Type":"ContainerDied","Data":"8c34f710afb4c8ffd677a4d0976545169287b55f20dc49b5cd755e49f341bf94"} Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.687348 4835 scope.go:117] "RemoveContainer" containerID="00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.687362 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5hnk" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.710647 4835 scope.go:117] "RemoveContainer" containerID="8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.731219 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5hnk"] Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.742857 4835 scope.go:117] "RemoveContainer" containerID="26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.742929 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n5hnk"] Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.785639 4835 scope.go:117] "RemoveContainer" containerID="00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87" Feb 16 15:41:41 crc kubenswrapper[4835]: E0216 15:41:41.786092 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87\": container with ID starting with 00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87 not found: ID does not exist" containerID="00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.786122 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87"} err="failed to get container status \"00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87\": rpc error: code = NotFound desc = could not find container \"00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87\": container with ID starting with 00854d69386bb357862b41eadae7c146f35b716cbf9b5cdadf70922df7e0bb87 not found: ID does 
not exist" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.786141 4835 scope.go:117] "RemoveContainer" containerID="8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4" Feb 16 15:41:41 crc kubenswrapper[4835]: E0216 15:41:41.786610 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4\": container with ID starting with 8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4 not found: ID does not exist" containerID="8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.786656 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4"} err="failed to get container status \"8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4\": rpc error: code = NotFound desc = could not find container \"8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4\": container with ID starting with 8bec6c951f0a5c2b6d07ca9f9b0eaba60ff5ebb15bbbcd10f480aef89dd9eee4 not found: ID does not exist" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.786688 4835 scope.go:117] "RemoveContainer" containerID="26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d" Feb 16 15:41:41 crc kubenswrapper[4835]: E0216 15:41:41.787184 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d\": container with ID starting with 26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d not found: ID does not exist" containerID="26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d" Feb 16 15:41:41 crc kubenswrapper[4835]: I0216 15:41:41.787224 4835 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d"} err="failed to get container status \"26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d\": rpc error: code = NotFound desc = could not find container \"26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d\": container with ID starting with 26b04f8a84db20aa6af95a2092d4b6bf8b043bfabad06e1d6fe89fb39fe7cf0d not found: ID does not exist" Feb 16 15:41:43 crc kubenswrapper[4835]: I0216 15:41:43.400681 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" path="/var/lib/kubelet/pods/1032c10a-b55a-40f0-919f-7569cbcd9c2f/volumes" Feb 16 15:41:48 crc kubenswrapper[4835]: I0216 15:41:48.586994 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:41:48 crc kubenswrapper[4835]: I0216 15:41:48.587679 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:41:48 crc kubenswrapper[4835]: I0216 15:41:48.587774 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:41:48 crc kubenswrapper[4835]: I0216 15:41:48.588997 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9c097a6ce48683144557a11d651a91fd3f122dd3409f9488f75b0cb97938c5c"} 
pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:41:48 crc kubenswrapper[4835]: I0216 15:41:48.589095 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://a9c097a6ce48683144557a11d651a91fd3f122dd3409f9488f75b0cb97938c5c" gracePeriod=600 Feb 16 15:41:48 crc kubenswrapper[4835]: I0216 15:41:48.765431 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="a9c097a6ce48683144557a11d651a91fd3f122dd3409f9488f75b0cb97938c5c" exitCode=0 Feb 16 15:41:48 crc kubenswrapper[4835]: I0216 15:41:48.765479 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"a9c097a6ce48683144557a11d651a91fd3f122dd3409f9488f75b0cb97938c5c"} Feb 16 15:41:48 crc kubenswrapper[4835]: I0216 15:41:48.765520 4835 scope.go:117] "RemoveContainer" containerID="d39c4a41854ee6a12b77cb4a9b8f896e84ae214e3da59692fadd9030abe02624" Feb 16 15:41:49 crc kubenswrapper[4835]: I0216 15:41:49.781143 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19"} Feb 16 15:41:51 crc kubenswrapper[4835]: E0216 15:41:51.387007 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:42:05 crc kubenswrapper[4835]: E0216 15:42:05.381381 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:42:17 crc kubenswrapper[4835]: E0216 15:42:17.380118 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:42:29 crc kubenswrapper[4835]: E0216 15:42:29.381352 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:42:40 crc kubenswrapper[4835]: E0216 15:42:40.381139 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:42:55 crc kubenswrapper[4835]: E0216 15:42:55.380869 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:43:06 crc kubenswrapper[4835]: E0216 15:43:06.382714 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:43:19 crc kubenswrapper[4835]: E0216 15:43:19.381499 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:43:32 crc kubenswrapper[4835]: E0216 15:43:32.380701 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:43:47 crc kubenswrapper[4835]: E0216 15:43:47.382037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:43:48 crc kubenswrapper[4835]: I0216 15:43:48.586284 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:43:48 crc kubenswrapper[4835]: I0216 15:43:48.586714 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:43:58 crc kubenswrapper[4835]: E0216 15:43:58.380161 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:44:09 crc kubenswrapper[4835]: E0216 15:44:09.382438 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:44:19 crc kubenswrapper[4835]: I0216 15:44:19.300881 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:44:19 crc kubenswrapper[4835]: I0216 15:44:19.301351 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:44:20 crc kubenswrapper[4835]: E0216 15:44:20.380823 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:44:31 crc kubenswrapper[4835]: E0216 15:44:31.387076 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:44:46 crc kubenswrapper[4835]: E0216 15:44:46.382568 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:44:48 crc kubenswrapper[4835]: I0216 15:44:48.587662 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:44:48 crc kubenswrapper[4835]: I0216 15:44:48.588024 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:44:48 crc kubenswrapper[4835]: I0216 15:44:48.588078 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:44:48 crc kubenswrapper[4835]: I0216 15:44:48.588879 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:44:48 crc kubenswrapper[4835]: I0216 15:44:48.588936 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" gracePeriod=600 Feb 16 15:44:48 crc kubenswrapper[4835]: E0216 15:44:48.748119 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:44:49 crc kubenswrapper[4835]: I0216 15:44:49.632424 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" exitCode=0 Feb 16 15:44:49 crc kubenswrapper[4835]: I0216 15:44:49.632489 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19"} Feb 16 15:44:49 crc kubenswrapper[4835]: I0216 15:44:49.632572 4835 scope.go:117] "RemoveContainer" containerID="a9c097a6ce48683144557a11d651a91fd3f122dd3409f9488f75b0cb97938c5c" Feb 16 15:44:49 crc kubenswrapper[4835]: I0216 15:44:49.635229 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:44:49 crc kubenswrapper[4835]: E0216 15:44:49.636198 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:44:57 crc kubenswrapper[4835]: E0216 15:44:57.384318 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.161123 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq"] Feb 16 15:45:00 crc kubenswrapper[4835]: E0216 15:45:00.163722 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="extract-content" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.163739 4835 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="extract-content" Feb 16 15:45:00 crc kubenswrapper[4835]: E0216 15:45:00.163757 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="registry-server" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.163765 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="registry-server" Feb 16 15:45:00 crc kubenswrapper[4835]: E0216 15:45:00.163775 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="extract-utilities" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.163782 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="extract-utilities" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.164033 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1032c10a-b55a-40f0-919f-7569cbcd9c2f" containerName="registry-server" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.164916 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.168212 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.168677 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.176162 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq"] Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.194347 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnlz\" (UniqueName: \"kubernetes.io/projected/6b922772-403b-4302-b153-70a6c6443187-kube-api-access-nsnlz\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.194453 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b922772-403b-4302-b153-70a6c6443187-secret-volume\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.194860 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b922772-403b-4302-b153-70a6c6443187-config-volume\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.296716 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b922772-403b-4302-b153-70a6c6443187-config-volume\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.296860 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsnlz\" (UniqueName: \"kubernetes.io/projected/6b922772-403b-4302-b153-70a6c6443187-kube-api-access-nsnlz\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.296900 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b922772-403b-4302-b153-70a6c6443187-secret-volume\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.297921 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b922772-403b-4302-b153-70a6c6443187-config-volume\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.302598 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6b922772-403b-4302-b153-70a6c6443187-secret-volume\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.311967 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsnlz\" (UniqueName: \"kubernetes.io/projected/6b922772-403b-4302-b153-70a6c6443187-kube-api-access-nsnlz\") pod \"collect-profiles-29520945-7t2lq\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.378908 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:45:00 crc kubenswrapper[4835]: E0216 15:45:00.379352 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.490968 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:00 crc kubenswrapper[4835]: I0216 15:45:00.935473 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq"] Feb 16 15:45:01 crc kubenswrapper[4835]: I0216 15:45:01.755300 4835 generic.go:334] "Generic (PLEG): container finished" podID="6b922772-403b-4302-b153-70a6c6443187" containerID="8d4ba7971116117debccab7243e6db101c4be1f7b6c8639d831676c771482e4e" exitCode=0 Feb 16 15:45:01 crc kubenswrapper[4835]: I0216 15:45:01.755361 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" event={"ID":"6b922772-403b-4302-b153-70a6c6443187","Type":"ContainerDied","Data":"8d4ba7971116117debccab7243e6db101c4be1f7b6c8639d831676c771482e4e"} Feb 16 15:45:01 crc kubenswrapper[4835]: I0216 15:45:01.755421 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" event={"ID":"6b922772-403b-4302-b153-70a6c6443187","Type":"ContainerStarted","Data":"7ff145044902b9ec4ffd6b0044002651b22cec09ef564c566ac320afd41b878b"} Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.192888 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.264092 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b922772-403b-4302-b153-70a6c6443187-config-volume\") pod \"6b922772-403b-4302-b153-70a6c6443187\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.264173 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b922772-403b-4302-b153-70a6c6443187-secret-volume\") pod \"6b922772-403b-4302-b153-70a6c6443187\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.264244 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsnlz\" (UniqueName: \"kubernetes.io/projected/6b922772-403b-4302-b153-70a6c6443187-kube-api-access-nsnlz\") pod \"6b922772-403b-4302-b153-70a6c6443187\" (UID: \"6b922772-403b-4302-b153-70a6c6443187\") " Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.265004 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b922772-403b-4302-b153-70a6c6443187-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b922772-403b-4302-b153-70a6c6443187" (UID: "6b922772-403b-4302-b153-70a6c6443187"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.270002 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b922772-403b-4302-b153-70a6c6443187-kube-api-access-nsnlz" (OuterVolumeSpecName: "kube-api-access-nsnlz") pod "6b922772-403b-4302-b153-70a6c6443187" (UID: "6b922772-403b-4302-b153-70a6c6443187"). 
InnerVolumeSpecName "kube-api-access-nsnlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.271387 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b922772-403b-4302-b153-70a6c6443187-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b922772-403b-4302-b153-70a6c6443187" (UID: "6b922772-403b-4302-b153-70a6c6443187"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.366759 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b922772-403b-4302-b153-70a6c6443187-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.366801 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsnlz\" (UniqueName: \"kubernetes.io/projected/6b922772-403b-4302-b153-70a6c6443187-kube-api-access-nsnlz\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.366814 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b922772-403b-4302-b153-70a6c6443187-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.776077 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" event={"ID":"6b922772-403b-4302-b153-70a6c6443187","Type":"ContainerDied","Data":"7ff145044902b9ec4ffd6b0044002651b22cec09ef564c566ac320afd41b878b"} Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.776123 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff145044902b9ec4ffd6b0044002651b22cec09ef564c566ac320afd41b878b" Feb 16 15:45:03 crc kubenswrapper[4835]: I0216 15:45:03.776132 4835 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-7t2lq" Feb 16 15:45:04 crc kubenswrapper[4835]: I0216 15:45:04.284775 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"] Feb 16 15:45:04 crc kubenswrapper[4835]: I0216 15:45:04.295861 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-7j6lh"] Feb 16 15:45:05 crc kubenswrapper[4835]: I0216 15:45:05.393394 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569d3eef-2b86-44fb-90a1-2bceae4d2e09" path="/var/lib/kubelet/pods/569d3eef-2b86-44fb-90a1-2bceae4d2e09/volumes" Feb 16 15:45:10 crc kubenswrapper[4835]: E0216 15:45:10.381671 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:45:13 crc kubenswrapper[4835]: I0216 15:45:13.379513 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:45:13 crc kubenswrapper[4835]: E0216 15:45:13.382768 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:45:24 crc kubenswrapper[4835]: E0216 15:45:24.381464 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:45:28 crc kubenswrapper[4835]: I0216 15:45:28.379033 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:45:28 crc kubenswrapper[4835]: E0216 15:45:28.379631 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:45:36 crc kubenswrapper[4835]: I0216 15:45:36.478769 4835 scope.go:117] "RemoveContainer" containerID="a355cb939e11ca18ba891026d16bd4729e06b8f994c6d0417fba4ba1f4d02eb5" Feb 16 15:45:37 crc kubenswrapper[4835]: E0216 15:45:37.381333 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:45:39 crc kubenswrapper[4835]: I0216 15:45:39.379643 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:45:39 crc kubenswrapper[4835]: E0216 15:45:39.380146 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:45:52 crc kubenswrapper[4835]: E0216 15:45:52.383519 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:45:54 crc kubenswrapper[4835]: I0216 15:45:54.379144 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:45:54 crc kubenswrapper[4835]: E0216 15:45:54.379694 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:46:06 crc kubenswrapper[4835]: E0216 15:46:06.381391 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:46:08 crc kubenswrapper[4835]: I0216 15:46:08.378677 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:46:08 crc kubenswrapper[4835]: E0216 15:46:08.379307 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:46:18 crc kubenswrapper[4835]: E0216 15:46:18.381419 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:46:20 crc kubenswrapper[4835]: I0216 15:46:20.379116 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:46:20 crc kubenswrapper[4835]: E0216 15:46:20.379911 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:46:33 crc kubenswrapper[4835]: I0216 15:46:33.381957 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:46:33 crc kubenswrapper[4835]: E0216 15:46:33.496512 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current 
was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:46:33 crc kubenswrapper[4835]: E0216 15:46:33.497010 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:46:33 crc kubenswrapper[4835]: E0216 15:46:33.497306 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cer
ts,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:46:33 crc kubenswrapper[4835]: E0216 15:46:33.499027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:46:34 crc kubenswrapper[4835]: I0216 15:46:34.380841 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:46:34 crc kubenswrapper[4835]: E0216 15:46:34.381351 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:46:44 crc kubenswrapper[4835]: E0216 15:46:44.381603 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:46:45 crc kubenswrapper[4835]: I0216 15:46:45.387234 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:46:45 crc kubenswrapper[4835]: E0216 15:46:45.387803 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:46:59 crc kubenswrapper[4835]: I0216 15:46:59.380622 4835 scope.go:117] "RemoveContainer" 
containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:46:59 crc kubenswrapper[4835]: E0216 15:46:59.381587 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:46:59 crc kubenswrapper[4835]: E0216 15:46:59.381776 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:47:12 crc kubenswrapper[4835]: I0216 15:47:12.378886 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:47:12 crc kubenswrapper[4835]: E0216 15:47:12.379699 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:47:13 crc kubenswrapper[4835]: E0216 15:47:13.382261 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.563837 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vthgg"] Feb 16 15:47:18 crc kubenswrapper[4835]: E0216 15:47:18.565920 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b922772-403b-4302-b153-70a6c6443187" containerName="collect-profiles" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.565962 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b922772-403b-4302-b153-70a6c6443187" containerName="collect-profiles" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.566502 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b922772-403b-4302-b153-70a6c6443187" containerName="collect-profiles" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.570194 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.580009 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-catalog-content\") pod \"redhat-marketplace-vthgg\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.580242 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-utilities\") pod \"redhat-marketplace-vthgg\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.580296 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdtns\" (UniqueName: \"kubernetes.io/projected/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-kube-api-access-pdtns\") pod \"redhat-marketplace-vthgg\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.606736 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vthgg"] Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.682358 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-catalog-content\") pod \"redhat-marketplace-vthgg\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.682606 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-utilities\") pod \"redhat-marketplace-vthgg\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.682668 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdtns\" (UniqueName: \"kubernetes.io/projected/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-kube-api-access-pdtns\") pod \"redhat-marketplace-vthgg\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.682845 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-catalog-content\") pod \"redhat-marketplace-vthgg\" (UID: 
\"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.683275 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-utilities\") pod \"redhat-marketplace-vthgg\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.704576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdtns\" (UniqueName: \"kubernetes.io/projected/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-kube-api-access-pdtns\") pod \"redhat-marketplace-vthgg\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:18 crc kubenswrapper[4835]: I0216 15:47:18.900565 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:19 crc kubenswrapper[4835]: I0216 15:47:19.346406 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vthgg"] Feb 16 15:47:19 crc kubenswrapper[4835]: I0216 15:47:19.401000 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vthgg" event={"ID":"96f8d8c6-6a16-486c-b0cb-220a9a62ba95","Type":"ContainerStarted","Data":"e5e4c759760ad67ef0ed61631dbbd0ab5f58787057b493b301b2bd9d8a070cb3"} Feb 16 15:47:20 crc kubenswrapper[4835]: I0216 15:47:20.396585 4835 generic.go:334] "Generic (PLEG): container finished" podID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerID="45db7e33647caba12e33d73dbb7dffd7ee22825d0ad8346bfd30c7ef09e6ccc9" exitCode=0 Feb 16 15:47:20 crc kubenswrapper[4835]: I0216 15:47:20.396638 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vthgg" 
event={"ID":"96f8d8c6-6a16-486c-b0cb-220a9a62ba95","Type":"ContainerDied","Data":"45db7e33647caba12e33d73dbb7dffd7ee22825d0ad8346bfd30c7ef09e6ccc9"} Feb 16 15:47:21 crc kubenswrapper[4835]: I0216 15:47:21.410319 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vthgg" event={"ID":"96f8d8c6-6a16-486c-b0cb-220a9a62ba95","Type":"ContainerStarted","Data":"21290b7c43ed0e3a82cccd1b13b13dd6c527e2351f7921a4a6d79de6a4d4d826"} Feb 16 15:47:22 crc kubenswrapper[4835]: I0216 15:47:22.421556 4835 generic.go:334] "Generic (PLEG): container finished" podID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerID="21290b7c43ed0e3a82cccd1b13b13dd6c527e2351f7921a4a6d79de6a4d4d826" exitCode=0 Feb 16 15:47:22 crc kubenswrapper[4835]: I0216 15:47:22.421786 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vthgg" event={"ID":"96f8d8c6-6a16-486c-b0cb-220a9a62ba95","Type":"ContainerDied","Data":"21290b7c43ed0e3a82cccd1b13b13dd6c527e2351f7921a4a6d79de6a4d4d826"} Feb 16 15:47:23 crc kubenswrapper[4835]: I0216 15:47:23.434649 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vthgg" event={"ID":"96f8d8c6-6a16-486c-b0cb-220a9a62ba95","Type":"ContainerStarted","Data":"9ad1a49968514e1ed0968ca66743d12bcd895e680282ab73465b8d8cd6c195d8"} Feb 16 15:47:23 crc kubenswrapper[4835]: I0216 15:47:23.456348 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vthgg" podStartSLOduration=3.051299857 podStartE2EDuration="5.45632419s" podCreationTimestamp="2026-02-16 15:47:18 +0000 UTC" firstStartedPulling="2026-02-16 15:47:20.399163861 +0000 UTC m=+2389.691156776" lastFinishedPulling="2026-02-16 15:47:22.804188214 +0000 UTC m=+2392.096181109" observedRunningTime="2026-02-16 15:47:23.453435044 +0000 UTC m=+2392.745427939" watchObservedRunningTime="2026-02-16 15:47:23.45632419 +0000 UTC 
m=+2392.748317095" Feb 16 15:47:24 crc kubenswrapper[4835]: I0216 15:47:24.379417 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:47:24 crc kubenswrapper[4835]: E0216 15:47:24.380033 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:47:27 crc kubenswrapper[4835]: E0216 15:47:27.382189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:47:28 crc kubenswrapper[4835]: I0216 15:47:28.900680 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:28 crc kubenswrapper[4835]: I0216 15:47:28.901020 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:28 crc kubenswrapper[4835]: I0216 15:47:28.972000 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:29 crc kubenswrapper[4835]: I0216 15:47:29.564678 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:33 crc kubenswrapper[4835]: I0216 15:47:33.346314 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-vthgg"] Feb 16 15:47:33 crc kubenswrapper[4835]: I0216 15:47:33.346901 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vthgg" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerName="registry-server" containerID="cri-o://9ad1a49968514e1ed0968ca66743d12bcd895e680282ab73465b8d8cd6c195d8" gracePeriod=2 Feb 16 15:47:33 crc kubenswrapper[4835]: I0216 15:47:33.552755 4835 generic.go:334] "Generic (PLEG): container finished" podID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerID="9ad1a49968514e1ed0968ca66743d12bcd895e680282ab73465b8d8cd6c195d8" exitCode=0 Feb 16 15:47:33 crc kubenswrapper[4835]: I0216 15:47:33.552799 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vthgg" event={"ID":"96f8d8c6-6a16-486c-b0cb-220a9a62ba95","Type":"ContainerDied","Data":"9ad1a49968514e1ed0968ca66743d12bcd895e680282ab73465b8d8cd6c195d8"} Feb 16 15:47:33 crc kubenswrapper[4835]: I0216 15:47:33.927092 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.111652 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-catalog-content\") pod \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.111770 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdtns\" (UniqueName: \"kubernetes.io/projected/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-kube-api-access-pdtns\") pod \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.111949 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-utilities\") pod \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\" (UID: \"96f8d8c6-6a16-486c-b0cb-220a9a62ba95\") " Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.113599 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-utilities" (OuterVolumeSpecName: "utilities") pod "96f8d8c6-6a16-486c-b0cb-220a9a62ba95" (UID: "96f8d8c6-6a16-486c-b0cb-220a9a62ba95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.117267 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-kube-api-access-pdtns" (OuterVolumeSpecName: "kube-api-access-pdtns") pod "96f8d8c6-6a16-486c-b0cb-220a9a62ba95" (UID: "96f8d8c6-6a16-486c-b0cb-220a9a62ba95"). InnerVolumeSpecName "kube-api-access-pdtns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.148413 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96f8d8c6-6a16-486c-b0cb-220a9a62ba95" (UID: "96f8d8c6-6a16-486c-b0cb-220a9a62ba95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.215221 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.215281 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.215304 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdtns\" (UniqueName: \"kubernetes.io/projected/96f8d8c6-6a16-486c-b0cb-220a9a62ba95-kube-api-access-pdtns\") on node \"crc\" DevicePath \"\"" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.561785 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vthgg" event={"ID":"96f8d8c6-6a16-486c-b0cb-220a9a62ba95","Type":"ContainerDied","Data":"e5e4c759760ad67ef0ed61631dbbd0ab5f58787057b493b301b2bd9d8a070cb3"} Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.562778 4835 scope.go:117] "RemoveContainer" containerID="9ad1a49968514e1ed0968ca66743d12bcd895e680282ab73465b8d8cd6c195d8" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.561871 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vthgg" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.581990 4835 scope.go:117] "RemoveContainer" containerID="21290b7c43ed0e3a82cccd1b13b13dd6c527e2351f7921a4a6d79de6a4d4d826" Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.600022 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vthgg"] Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.611830 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vthgg"] Feb 16 15:47:34 crc kubenswrapper[4835]: I0216 15:47:34.617157 4835 scope.go:117] "RemoveContainer" containerID="45db7e33647caba12e33d73dbb7dffd7ee22825d0ad8346bfd30c7ef09e6ccc9" Feb 16 15:47:35 crc kubenswrapper[4835]: I0216 15:47:35.398049 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" path="/var/lib/kubelet/pods/96f8d8c6-6a16-486c-b0cb-220a9a62ba95/volumes" Feb 16 15:47:37 crc kubenswrapper[4835]: I0216 15:47:37.379211 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:47:37 crc kubenswrapper[4835]: E0216 15:47:37.379747 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:47:40 crc kubenswrapper[4835]: E0216 15:47:40.380640 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:47:49 crc kubenswrapper[4835]: I0216 15:47:49.379080 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:47:49 crc kubenswrapper[4835]: E0216 15:47:49.379905 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:47:54 crc kubenswrapper[4835]: E0216 15:47:54.380639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:48:03 crc kubenswrapper[4835]: I0216 15:48:03.379041 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:48:03 crc kubenswrapper[4835]: E0216 15:48:03.379895 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:48:06 crc kubenswrapper[4835]: E0216 15:48:06.383351 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:48:14 crc kubenswrapper[4835]: I0216 15:48:14.379157 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:48:14 crc kubenswrapper[4835]: E0216 15:48:14.379975 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.589202 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j6gg6"] Feb 16 15:48:18 crc kubenswrapper[4835]: E0216 15:48:18.590493 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerName="extract-utilities" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.590568 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerName="extract-utilities" Feb 16 15:48:18 crc kubenswrapper[4835]: E0216 15:48:18.590652 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerName="registry-server" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.590673 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerName="registry-server" Feb 16 15:48:18 crc 
kubenswrapper[4835]: E0216 15:48:18.590704 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerName="extract-content" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.590724 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerName="extract-content" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.591200 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f8d8c6-6a16-486c-b0cb-220a9a62ba95" containerName="registry-server" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.594729 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.600787 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6gg6"] Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.669016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-catalog-content\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.669296 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jjsn\" (UniqueName: \"kubernetes.io/projected/6db283eb-a1d6-44bd-84b4-f184190a49ee-kube-api-access-6jjsn\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.669396 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-utilities\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.771378 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-catalog-content\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.771544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jjsn\" (UniqueName: \"kubernetes.io/projected/6db283eb-a1d6-44bd-84b4-f184190a49ee-kube-api-access-6jjsn\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.771615 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-utilities\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.772133 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-catalog-content\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.772360 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-utilities\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.791525 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jjsn\" (UniqueName: \"kubernetes.io/projected/6db283eb-a1d6-44bd-84b4-f184190a49ee-kube-api-access-6jjsn\") pod \"certified-operators-j6gg6\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:18 crc kubenswrapper[4835]: I0216 15:48:18.925861 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:19 crc kubenswrapper[4835]: I0216 15:48:19.420058 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6gg6"] Feb 16 15:48:20 crc kubenswrapper[4835]: I0216 15:48:20.097387 4835 generic.go:334] "Generic (PLEG): container finished" podID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerID="fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53" exitCode=0 Feb 16 15:48:20 crc kubenswrapper[4835]: I0216 15:48:20.097457 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6gg6" event={"ID":"6db283eb-a1d6-44bd-84b4-f184190a49ee","Type":"ContainerDied","Data":"fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53"} Feb 16 15:48:20 crc kubenswrapper[4835]: I0216 15:48:20.097663 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6gg6" event={"ID":"6db283eb-a1d6-44bd-84b4-f184190a49ee","Type":"ContainerStarted","Data":"d91d3dc35810ae40766cc712ec74b9b7f2a936a3818ee421d4749bfa09e8bb5f"} Feb 16 15:48:20 crc kubenswrapper[4835]: E0216 15:48:20.379883 4835 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:48:21 crc kubenswrapper[4835]: I0216 15:48:21.107494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6gg6" event={"ID":"6db283eb-a1d6-44bd-84b4-f184190a49ee","Type":"ContainerStarted","Data":"3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e"} Feb 16 15:48:22 crc kubenswrapper[4835]: I0216 15:48:22.121606 4835 generic.go:334] "Generic (PLEG): container finished" podID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerID="3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e" exitCode=0 Feb 16 15:48:22 crc kubenswrapper[4835]: I0216 15:48:22.121631 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6gg6" event={"ID":"6db283eb-a1d6-44bd-84b4-f184190a49ee","Type":"ContainerDied","Data":"3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e"} Feb 16 15:48:23 crc kubenswrapper[4835]: I0216 15:48:23.134293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6gg6" event={"ID":"6db283eb-a1d6-44bd-84b4-f184190a49ee","Type":"ContainerStarted","Data":"4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a"} Feb 16 15:48:23 crc kubenswrapper[4835]: I0216 15:48:23.164190 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j6gg6" podStartSLOduration=2.730657482 podStartE2EDuration="5.164170512s" podCreationTimestamp="2026-02-16 15:48:18 +0000 UTC" firstStartedPulling="2026-02-16 15:48:20.099177577 +0000 UTC m=+2449.391170472" lastFinishedPulling="2026-02-16 15:48:22.532690597 +0000 UTC m=+2451.824683502" 
observedRunningTime="2026-02-16 15:48:23.157731274 +0000 UTC m=+2452.449724189" watchObservedRunningTime="2026-02-16 15:48:23.164170512 +0000 UTC m=+2452.456163417" Feb 16 15:48:26 crc kubenswrapper[4835]: I0216 15:48:26.379345 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:48:26 crc kubenswrapper[4835]: E0216 15:48:26.380153 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:48:28 crc kubenswrapper[4835]: I0216 15:48:28.926622 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:28 crc kubenswrapper[4835]: I0216 15:48:28.926924 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:28 crc kubenswrapper[4835]: I0216 15:48:28.992083 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:29 crc kubenswrapper[4835]: I0216 15:48:29.276426 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:29 crc kubenswrapper[4835]: I0216 15:48:29.338090 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6gg6"] Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.215290 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j6gg6" 
podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerName="registry-server" containerID="cri-o://4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a" gracePeriod=2 Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.691880 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.801213 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jjsn\" (UniqueName: \"kubernetes.io/projected/6db283eb-a1d6-44bd-84b4-f184190a49ee-kube-api-access-6jjsn\") pod \"6db283eb-a1d6-44bd-84b4-f184190a49ee\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.801334 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-catalog-content\") pod \"6db283eb-a1d6-44bd-84b4-f184190a49ee\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.801441 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-utilities\") pod \"6db283eb-a1d6-44bd-84b4-f184190a49ee\" (UID: \"6db283eb-a1d6-44bd-84b4-f184190a49ee\") " Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.802499 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-utilities" (OuterVolumeSpecName: "utilities") pod "6db283eb-a1d6-44bd-84b4-f184190a49ee" (UID: "6db283eb-a1d6-44bd-84b4-f184190a49ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.806275 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db283eb-a1d6-44bd-84b4-f184190a49ee-kube-api-access-6jjsn" (OuterVolumeSpecName: "kube-api-access-6jjsn") pod "6db283eb-a1d6-44bd-84b4-f184190a49ee" (UID: "6db283eb-a1d6-44bd-84b4-f184190a49ee"). InnerVolumeSpecName "kube-api-access-6jjsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.903770 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jjsn\" (UniqueName: \"kubernetes.io/projected/6db283eb-a1d6-44bd-84b4-f184190a49ee-kube-api-access-6jjsn\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.903799 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:31 crc kubenswrapper[4835]: I0216 15:48:31.934646 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6db283eb-a1d6-44bd-84b4-f184190a49ee" (UID: "6db283eb-a1d6-44bd-84b4-f184190a49ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.005579 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db283eb-a1d6-44bd-84b4-f184190a49ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.227617 4835 generic.go:334] "Generic (PLEG): container finished" podID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerID="4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a" exitCode=0 Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.227701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6gg6" event={"ID":"6db283eb-a1d6-44bd-84b4-f184190a49ee","Type":"ContainerDied","Data":"4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a"} Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.227757 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6gg6" event={"ID":"6db283eb-a1d6-44bd-84b4-f184190a49ee","Type":"ContainerDied","Data":"d91d3dc35810ae40766cc712ec74b9b7f2a936a3818ee421d4749bfa09e8bb5f"} Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.227783 4835 scope.go:117] "RemoveContainer" containerID="4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.227955 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j6gg6" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.255397 4835 scope.go:117] "RemoveContainer" containerID="3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.274111 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6gg6"] Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.282564 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j6gg6"] Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.305113 4835 scope.go:117] "RemoveContainer" containerID="fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.346772 4835 scope.go:117] "RemoveContainer" containerID="4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a" Feb 16 15:48:32 crc kubenswrapper[4835]: E0216 15:48:32.347253 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a\": container with ID starting with 4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a not found: ID does not exist" containerID="4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.347308 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a"} err="failed to get container status \"4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a\": rpc error: code = NotFound desc = could not find container \"4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a\": container with ID starting with 4357a4573c6c551eff13ab0fc35b48bd361c658669518ef710d49a599942e38a not 
found: ID does not exist" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.347334 4835 scope.go:117] "RemoveContainer" containerID="3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e" Feb 16 15:48:32 crc kubenswrapper[4835]: E0216 15:48:32.347673 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e\": container with ID starting with 3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e not found: ID does not exist" containerID="3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.347715 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e"} err="failed to get container status \"3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e\": rpc error: code = NotFound desc = could not find container \"3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e\": container with ID starting with 3bb1ccd83380c07f58c5cafd2fbefbde7ff7ad1a47f5e6abf6f15c8f10d23f3e not found: ID does not exist" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.347744 4835 scope.go:117] "RemoveContainer" containerID="fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53" Feb 16 15:48:32 crc kubenswrapper[4835]: E0216 15:48:32.348263 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53\": container with ID starting with fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53 not found: ID does not exist" containerID="fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53" Feb 16 15:48:32 crc kubenswrapper[4835]: I0216 15:48:32.348503 4835 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53"} err="failed to get container status \"fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53\": rpc error: code = NotFound desc = could not find container \"fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53\": container with ID starting with fa06ba1b9b21b164a601f74a961fee68d22b9328f4b6efd3f352cc0f90f82d53 not found: ID does not exist" Feb 16 15:48:33 crc kubenswrapper[4835]: I0216 15:48:33.418295 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" path="/var/lib/kubelet/pods/6db283eb-a1d6-44bd-84b4-f184190a49ee/volumes" Feb 16 15:48:34 crc kubenswrapper[4835]: E0216 15:48:34.380343 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:48:37 crc kubenswrapper[4835]: I0216 15:48:37.378742 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:48:37 crc kubenswrapper[4835]: E0216 15:48:37.379104 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:48:47 crc kubenswrapper[4835]: E0216 15:48:47.382790 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:48:52 crc kubenswrapper[4835]: I0216 15:48:52.379552 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:48:52 crc kubenswrapper[4835]: E0216 15:48:52.380308 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:48:59 crc kubenswrapper[4835]: E0216 15:48:59.383096 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:49:07 crc kubenswrapper[4835]: I0216 15:49:07.379311 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:49:07 crc kubenswrapper[4835]: E0216 15:49:07.380456 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" 
podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:49:13 crc kubenswrapper[4835]: E0216 15:49:13.380800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:49:22 crc kubenswrapper[4835]: I0216 15:49:22.378415 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:49:22 crc kubenswrapper[4835]: E0216 15:49:22.379010 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:49:28 crc kubenswrapper[4835]: E0216 15:49:28.381778 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:49:36 crc kubenswrapper[4835]: I0216 15:49:36.379734 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:49:36 crc kubenswrapper[4835]: E0216 15:49:36.380472 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:49:39 crc kubenswrapper[4835]: E0216 15:49:39.382070 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:49:50 crc kubenswrapper[4835]: I0216 15:49:50.379688 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:49:51 crc kubenswrapper[4835]: I0216 15:49:51.284515 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"1ed5073fbefd2b2c9b0e060f08d45524b6c48bb83dc82a090687f31f0b53f4d6"} Feb 16 15:49:52 crc kubenswrapper[4835]: E0216 15:49:52.381213 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.565006 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6pmnl"] Feb 16 15:49:56 crc kubenswrapper[4835]: E0216 15:49:56.566940 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerName="extract-content" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.566965 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerName="extract-content" Feb 16 15:49:56 crc kubenswrapper[4835]: E0216 15:49:56.567001 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerName="registry-server" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.567012 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerName="registry-server" Feb 16 15:49:56 crc kubenswrapper[4835]: E0216 15:49:56.567043 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerName="extract-utilities" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.567056 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerName="extract-utilities" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.567378 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db283eb-a1d6-44bd-84b4-f184190a49ee" containerName="registry-server" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.569521 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.582086 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6pmnl"] Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.727096 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpgw\" (UniqueName: \"kubernetes.io/projected/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-kube-api-access-tcpgw\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.727401 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-catalog-content\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.727980 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-utilities\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.829871 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-utilities\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.829954 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tcpgw\" (UniqueName: \"kubernetes.io/projected/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-kube-api-access-tcpgw\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.830023 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-catalog-content\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.830582 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-catalog-content\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.833682 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-utilities\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.852364 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcpgw\" (UniqueName: \"kubernetes.io/projected/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-kube-api-access-tcpgw\") pod \"community-operators-6pmnl\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:56 crc kubenswrapper[4835]: I0216 15:49:56.909684 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:49:57 crc kubenswrapper[4835]: I0216 15:49:57.436825 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6pmnl"] Feb 16 15:49:57 crc kubenswrapper[4835]: W0216 15:49:57.441090 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d4fb902_7bfe_4d2d_aa14_0502e638bee1.slice/crio-01b6994618929c702b8d85d7d10272c438b9d6b5adb1bc608a2982d4020b0e74 WatchSource:0}: Error finding container 01b6994618929c702b8d85d7d10272c438b9d6b5adb1bc608a2982d4020b0e74: Status 404 returned error can't find the container with id 01b6994618929c702b8d85d7d10272c438b9d6b5adb1bc608a2982d4020b0e74 Feb 16 15:49:58 crc kubenswrapper[4835]: I0216 15:49:58.370253 4835 generic.go:334] "Generic (PLEG): container finished" podID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerID="d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756" exitCode=0 Feb 16 15:49:58 crc kubenswrapper[4835]: I0216 15:49:58.370694 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmnl" event={"ID":"2d4fb902-7bfe-4d2d-aa14-0502e638bee1","Type":"ContainerDied","Data":"d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756"} Feb 16 15:49:58 crc kubenswrapper[4835]: I0216 15:49:58.370729 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmnl" event={"ID":"2d4fb902-7bfe-4d2d-aa14-0502e638bee1","Type":"ContainerStarted","Data":"01b6994618929c702b8d85d7d10272c438b9d6b5adb1bc608a2982d4020b0e74"} Feb 16 15:49:59 crc kubenswrapper[4835]: I0216 15:49:59.392387 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmnl" 
event={"ID":"2d4fb902-7bfe-4d2d-aa14-0502e638bee1","Type":"ContainerStarted","Data":"4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67"} Feb 16 15:50:00 crc kubenswrapper[4835]: I0216 15:50:00.399780 4835 generic.go:334] "Generic (PLEG): container finished" podID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerID="4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67" exitCode=0 Feb 16 15:50:00 crc kubenswrapper[4835]: I0216 15:50:00.399880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmnl" event={"ID":"2d4fb902-7bfe-4d2d-aa14-0502e638bee1","Type":"ContainerDied","Data":"4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67"} Feb 16 15:50:01 crc kubenswrapper[4835]: I0216 15:50:01.411503 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmnl" event={"ID":"2d4fb902-7bfe-4d2d-aa14-0502e638bee1","Type":"ContainerStarted","Data":"88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc"} Feb 16 15:50:01 crc kubenswrapper[4835]: I0216 15:50:01.432501 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6pmnl" podStartSLOduration=2.865025551 podStartE2EDuration="5.43247592s" podCreationTimestamp="2026-02-16 15:49:56 +0000 UTC" firstStartedPulling="2026-02-16 15:49:58.385791684 +0000 UTC m=+2547.677784579" lastFinishedPulling="2026-02-16 15:50:00.953242053 +0000 UTC m=+2550.245234948" observedRunningTime="2026-02-16 15:50:01.427559871 +0000 UTC m=+2550.719552776" watchObservedRunningTime="2026-02-16 15:50:01.43247592 +0000 UTC m=+2550.724468835" Feb 16 15:50:06 crc kubenswrapper[4835]: I0216 15:50:06.911162 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:50:06 crc kubenswrapper[4835]: I0216 15:50:06.911723 4835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:50:06 crc kubenswrapper[4835]: I0216 15:50:06.955322 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:50:07 crc kubenswrapper[4835]: E0216 15:50:07.381828 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:50:07 crc kubenswrapper[4835]: I0216 15:50:07.546912 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:50:07 crc kubenswrapper[4835]: I0216 15:50:07.614393 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6pmnl"] Feb 16 15:50:09 crc kubenswrapper[4835]: I0216 15:50:09.509629 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6pmnl" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerName="registry-server" containerID="cri-o://88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc" gracePeriod=2 Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.012671 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.131882 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcpgw\" (UniqueName: \"kubernetes.io/projected/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-kube-api-access-tcpgw\") pod \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.132006 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-utilities\") pod \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.132087 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-catalog-content\") pod \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\" (UID: \"2d4fb902-7bfe-4d2d-aa14-0502e638bee1\") " Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.133094 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-utilities" (OuterVolumeSpecName: "utilities") pod "2d4fb902-7bfe-4d2d-aa14-0502e638bee1" (UID: "2d4fb902-7bfe-4d2d-aa14-0502e638bee1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.138765 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-kube-api-access-tcpgw" (OuterVolumeSpecName: "kube-api-access-tcpgw") pod "2d4fb902-7bfe-4d2d-aa14-0502e638bee1" (UID: "2d4fb902-7bfe-4d2d-aa14-0502e638bee1"). InnerVolumeSpecName "kube-api-access-tcpgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.185564 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d4fb902-7bfe-4d2d-aa14-0502e638bee1" (UID: "2d4fb902-7bfe-4d2d-aa14-0502e638bee1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.234946 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.235503 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.235522 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcpgw\" (UniqueName: \"kubernetes.io/projected/2d4fb902-7bfe-4d2d-aa14-0502e638bee1-kube-api-access-tcpgw\") on node \"crc\" DevicePath \"\"" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.520966 4835 generic.go:334] "Generic (PLEG): container finished" podID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerID="88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc" exitCode=0 Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.521020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmnl" event={"ID":"2d4fb902-7bfe-4d2d-aa14-0502e638bee1","Type":"ContainerDied","Data":"88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc"} Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.521035 4835 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-6pmnl" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.521057 4835 scope.go:117] "RemoveContainer" containerID="88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.521046 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6pmnl" event={"ID":"2d4fb902-7bfe-4d2d-aa14-0502e638bee1","Type":"ContainerDied","Data":"01b6994618929c702b8d85d7d10272c438b9d6b5adb1bc608a2982d4020b0e74"} Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.554306 4835 scope.go:117] "RemoveContainer" containerID="4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.557285 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6pmnl"] Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.566785 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6pmnl"] Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.577195 4835 scope.go:117] "RemoveContainer" containerID="d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.616093 4835 scope.go:117] "RemoveContainer" containerID="88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc" Feb 16 15:50:10 crc kubenswrapper[4835]: E0216 15:50:10.616438 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc\": container with ID starting with 88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc not found: ID does not exist" containerID="88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.616482 
4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc"} err="failed to get container status \"88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc\": rpc error: code = NotFound desc = could not find container \"88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc\": container with ID starting with 88e8d63adccb71179495e39bd2e0600f35eb59d072c7165fc710744eddc347cc not found: ID does not exist" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.616508 4835 scope.go:117] "RemoveContainer" containerID="4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67" Feb 16 15:50:10 crc kubenswrapper[4835]: E0216 15:50:10.616930 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67\": container with ID starting with 4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67 not found: ID does not exist" containerID="4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.616957 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67"} err="failed to get container status \"4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67\": rpc error: code = NotFound desc = could not find container \"4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67\": container with ID starting with 4a76d0c699f627263e92a9df8b94ac7ff46486b0456b384590284e8e0adb3f67 not found: ID does not exist" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.616981 4835 scope.go:117] "RemoveContainer" containerID="d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756" Feb 16 15:50:10 crc kubenswrapper[4835]: E0216 
15:50:10.617362 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756\": container with ID starting with d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756 not found: ID does not exist" containerID="d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756" Feb 16 15:50:10 crc kubenswrapper[4835]: I0216 15:50:10.617403 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756"} err="failed to get container status \"d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756\": rpc error: code = NotFound desc = could not find container \"d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756\": container with ID starting with d2f3f6f10e0616ea3388b71873bfa6696ea4184b4b9024bc6e98c89180cc0756 not found: ID does not exist" Feb 16 15:50:11 crc kubenswrapper[4835]: I0216 15:50:11.392711 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" path="/var/lib/kubelet/pods/2d4fb902-7bfe-4d2d-aa14-0502e638bee1/volumes" Feb 16 15:50:18 crc kubenswrapper[4835]: E0216 15:50:18.381518 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:50:32 crc kubenswrapper[4835]: E0216 15:50:32.380818 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:50:43 crc kubenswrapper[4835]: E0216 15:50:43.382501 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:50:54 crc kubenswrapper[4835]: E0216 15:50:54.382781 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:51:05 crc kubenswrapper[4835]: E0216 15:51:05.381191 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:51:17 crc kubenswrapper[4835]: E0216 15:51:17.383515 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:51:32 crc kubenswrapper[4835]: E0216 15:51:32.380618 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.230419 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vb8vj"] Feb 16 15:51:36 crc kubenswrapper[4835]: E0216 15:51:36.231572 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerName="extract-content" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.231600 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerName="extract-content" Feb 16 15:51:36 crc kubenswrapper[4835]: E0216 15:51:36.231610 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerName="extract-utilities" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.231617 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerName="extract-utilities" Feb 16 15:51:36 crc kubenswrapper[4835]: E0216 15:51:36.231642 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerName="registry-server" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.231651 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerName="registry-server" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.231968 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4fb902-7bfe-4d2d-aa14-0502e638bee1" containerName="registry-server" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.234575 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.239059 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vb8vj"] Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.379757 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvpk\" (UniqueName: \"kubernetes.io/projected/0d8d877f-1819-4832-b7bd-690c62ddb4b3-kube-api-access-6cvpk\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.379865 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-utilities\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.379957 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-catalog-content\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.482189 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cvpk\" (UniqueName: \"kubernetes.io/projected/0d8d877f-1819-4832-b7bd-690c62ddb4b3-kube-api-access-6cvpk\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.482703 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-utilities\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.482765 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-catalog-content\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.483206 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-catalog-content\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.483263 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-utilities\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.503633 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cvpk\" (UniqueName: \"kubernetes.io/projected/0d8d877f-1819-4832-b7bd-690c62ddb4b3-kube-api-access-6cvpk\") pod \"redhat-operators-vb8vj\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:36 crc kubenswrapper[4835]: I0216 15:51:36.561857 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:37 crc kubenswrapper[4835]: I0216 15:51:37.006032 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vb8vj"] Feb 16 15:51:37 crc kubenswrapper[4835]: I0216 15:51:37.340844 4835 generic.go:334] "Generic (PLEG): container finished" podID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerID="f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216" exitCode=0 Feb 16 15:51:37 crc kubenswrapper[4835]: I0216 15:51:37.340904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vb8vj" event={"ID":"0d8d877f-1819-4832-b7bd-690c62ddb4b3","Type":"ContainerDied","Data":"f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216"} Feb 16 15:51:37 crc kubenswrapper[4835]: I0216 15:51:37.341057 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vb8vj" event={"ID":"0d8d877f-1819-4832-b7bd-690c62ddb4b3","Type":"ContainerStarted","Data":"6a413acdfed4c02b0f1b95ac4d7db3f47d3cfb8fb933ddc4869c0330b8780f43"} Feb 16 15:51:37 crc kubenswrapper[4835]: I0216 15:51:37.343230 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:51:39 crc kubenswrapper[4835]: I0216 15:51:39.360998 4835 generic.go:334] "Generic (PLEG): container finished" podID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerID="877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4" exitCode=0 Feb 16 15:51:39 crc kubenswrapper[4835]: I0216 15:51:39.361088 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vb8vj" event={"ID":"0d8d877f-1819-4832-b7bd-690c62ddb4b3","Type":"ContainerDied","Data":"877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4"} Feb 16 15:51:42 crc kubenswrapper[4835]: I0216 15:51:42.396754 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vb8vj" event={"ID":"0d8d877f-1819-4832-b7bd-690c62ddb4b3","Type":"ContainerStarted","Data":"ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771"} Feb 16 15:51:43 crc kubenswrapper[4835]: I0216 15:51:43.430419 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vb8vj" podStartSLOduration=3.930223636 podStartE2EDuration="7.430403134s" podCreationTimestamp="2026-02-16 15:51:36 +0000 UTC" firstStartedPulling="2026-02-16 15:51:37.343012069 +0000 UTC m=+2646.635004964" lastFinishedPulling="2026-02-16 15:51:40.843191527 +0000 UTC m=+2650.135184462" observedRunningTime="2026-02-16 15:51:43.424376906 +0000 UTC m=+2652.716369801" watchObservedRunningTime="2026-02-16 15:51:43.430403134 +0000 UTC m=+2652.722396029" Feb 16 15:51:45 crc kubenswrapper[4835]: E0216 15:51:45.508513 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:51:45 crc kubenswrapper[4835]: E0216 15:51:45.508784 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:51:45 crc kubenswrapper[4835]: E0216 15:51:45.508918 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:51:45 crc kubenswrapper[4835]: E0216 15:51:45.510170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:51:46 crc kubenswrapper[4835]: I0216 15:51:46.562754 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:46 crc kubenswrapper[4835]: I0216 15:51:46.563344 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:47 crc kubenswrapper[4835]: I0216 15:51:47.608586 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vb8vj" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:51:47 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 16 15:51:47 crc kubenswrapper[4835]: > Feb 16 15:51:56 crc kubenswrapper[4835]: I0216 15:51:56.609924 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:56 crc kubenswrapper[4835]: I0216 15:51:56.665236 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:56 crc kubenswrapper[4835]: I0216 15:51:56.853558 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vb8vj"] Feb 16 15:51:58 crc kubenswrapper[4835]: I0216 15:51:58.551345 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vb8vj" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerName="registry-server" containerID="cri-o://ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771" gracePeriod=2 Feb 16 15:51:58 crc kubenswrapper[4835]: E0216 15:51:58.749363 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d8d877f_1819_4832_b7bd_690c62ddb4b3.slice/crio-conmon-ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.031446 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.076422 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cvpk\" (UniqueName: \"kubernetes.io/projected/0d8d877f-1819-4832-b7bd-690c62ddb4b3-kube-api-access-6cvpk\") pod \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.076590 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-utilities\") pod \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.076644 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-catalog-content\") pod \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\" (UID: \"0d8d877f-1819-4832-b7bd-690c62ddb4b3\") " Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.077469 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-utilities" (OuterVolumeSpecName: "utilities") pod "0d8d877f-1819-4832-b7bd-690c62ddb4b3" (UID: "0d8d877f-1819-4832-b7bd-690c62ddb4b3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.082646 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8d877f-1819-4832-b7bd-690c62ddb4b3-kube-api-access-6cvpk" (OuterVolumeSpecName: "kube-api-access-6cvpk") pod "0d8d877f-1819-4832-b7bd-690c62ddb4b3" (UID: "0d8d877f-1819-4832-b7bd-690c62ddb4b3"). InnerVolumeSpecName "kube-api-access-6cvpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.179065 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cvpk\" (UniqueName: \"kubernetes.io/projected/0d8d877f-1819-4832-b7bd-690c62ddb4b3-kube-api-access-6cvpk\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.179096 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.200671 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d8d877f-1819-4832-b7bd-690c62ddb4b3" (UID: "0d8d877f-1819-4832-b7bd-690c62ddb4b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.280924 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d877f-1819-4832-b7bd-690c62ddb4b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:51:59 crc kubenswrapper[4835]: E0216 15:51:59.380470 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.561265 4835 generic.go:334] "Generic (PLEG): container finished" podID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerID="ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771" exitCode=0 Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.561361 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vb8vj" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.561357 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vb8vj" event={"ID":"0d8d877f-1819-4832-b7bd-690c62ddb4b3","Type":"ContainerDied","Data":"ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771"} Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.561803 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vb8vj" event={"ID":"0d8d877f-1819-4832-b7bd-690c62ddb4b3","Type":"ContainerDied","Data":"6a413acdfed4c02b0f1b95ac4d7db3f47d3cfb8fb933ddc4869c0330b8780f43"} Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.561822 4835 scope.go:117] "RemoveContainer" containerID="ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.588727 4835 scope.go:117] "RemoveContainer" containerID="877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.603730 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vb8vj"] Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.609894 4835 scope.go:117] "RemoveContainer" containerID="f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.610003 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vb8vj"] Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.711371 4835 scope.go:117] "RemoveContainer" containerID="ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771" Feb 16 15:51:59 crc kubenswrapper[4835]: E0216 15:51:59.712239 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771\": container with ID starting with ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771 not found: ID does not exist" containerID="ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.712273 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771"} err="failed to get container status \"ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771\": rpc error: code = NotFound desc = could not find container \"ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771\": container with ID starting with ce7f11697148108bfce134226799fc929848fbda52cd577e7ffbfcdcfde0a771 not found: ID does not exist" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.712295 4835 scope.go:117] "RemoveContainer" containerID="877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4" Feb 16 15:51:59 crc kubenswrapper[4835]: E0216 15:51:59.712743 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4\": container with ID starting with 877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4 not found: ID does not exist" containerID="877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.712764 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4"} err="failed to get container status \"877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4\": rpc error: code = NotFound desc = could not find container \"877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4\": container with ID 
starting with 877fe7853cc705a5e5784aeb3f82305eb5fc811d61733747a672aaf8c24b06d4 not found: ID does not exist" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.712795 4835 scope.go:117] "RemoveContainer" containerID="f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216" Feb 16 15:51:59 crc kubenswrapper[4835]: E0216 15:51:59.713046 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216\": container with ID starting with f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216 not found: ID does not exist" containerID="f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216" Feb 16 15:51:59 crc kubenswrapper[4835]: I0216 15:51:59.713073 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216"} err="failed to get container status \"f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216\": rpc error: code = NotFound desc = could not find container \"f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216\": container with ID starting with f83b8c3f76e750d1abcae78eb941cf549f3eea46fa523eb23a7b1a6a6eb4d216 not found: ID does not exist" Feb 16 15:52:01 crc kubenswrapper[4835]: I0216 15:52:01.393575 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" path="/var/lib/kubelet/pods/0d8d877f-1819-4832-b7bd-690c62ddb4b3/volumes" Feb 16 15:52:10 crc kubenswrapper[4835]: E0216 15:52:10.380240 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" 
Feb 16 15:52:18 crc kubenswrapper[4835]: I0216 15:52:18.586780 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:52:18 crc kubenswrapper[4835]: I0216 15:52:18.587306 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:52:21 crc kubenswrapper[4835]: E0216 15:52:21.388482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:52:32 crc kubenswrapper[4835]: E0216 15:52:32.382594 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:52:45 crc kubenswrapper[4835]: E0216 15:52:45.381020 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:52:48 crc kubenswrapper[4835]: 
I0216 15:52:48.586971 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:52:48 crc kubenswrapper[4835]: I0216 15:52:48.587317 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:52:58 crc kubenswrapper[4835]: E0216 15:52:58.381996 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:53:11 crc kubenswrapper[4835]: E0216 15:53:11.386342 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:53:18 crc kubenswrapper[4835]: I0216 15:53:18.587248 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:53:18 crc kubenswrapper[4835]: I0216 15:53:18.587972 4835 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:53:18 crc kubenswrapper[4835]: I0216 15:53:18.588034 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:53:18 crc kubenswrapper[4835]: I0216 15:53:18.589011 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ed5073fbefd2b2c9b0e060f08d45524b6c48bb83dc82a090687f31f0b53f4d6"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:53:18 crc kubenswrapper[4835]: I0216 15:53:18.589081 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://1ed5073fbefd2b2c9b0e060f08d45524b6c48bb83dc82a090687f31f0b53f4d6" gracePeriod=600 Feb 16 15:53:19 crc kubenswrapper[4835]: I0216 15:53:19.431084 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="1ed5073fbefd2b2c9b0e060f08d45524b6c48bb83dc82a090687f31f0b53f4d6" exitCode=0 Feb 16 15:53:19 crc kubenswrapper[4835]: I0216 15:53:19.431157 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"1ed5073fbefd2b2c9b0e060f08d45524b6c48bb83dc82a090687f31f0b53f4d6"} Feb 16 15:53:19 crc kubenswrapper[4835]: I0216 15:53:19.431966 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23"} Feb 16 15:53:19 crc kubenswrapper[4835]: I0216 15:53:19.432011 4835 scope.go:117] "RemoveContainer" containerID="3083ba214ab0984e51856c16944c53d6036fe1a56867a5a55e6c390d77cd0b19" Feb 16 15:53:24 crc kubenswrapper[4835]: E0216 15:53:24.381092 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:53:35 crc kubenswrapper[4835]: E0216 15:53:35.381432 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:53:49 crc kubenswrapper[4835]: E0216 15:53:49.380514 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:54:04 crc kubenswrapper[4835]: E0216 15:54:04.381446 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" 
podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:54:18 crc kubenswrapper[4835]: E0216 15:54:18.381780 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.943422 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5m7r9/must-gather-xsr78"] Feb 16 15:54:28 crc kubenswrapper[4835]: E0216 15:54:28.945339 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerName="extract-utilities" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.945414 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerName="extract-utilities" Feb 16 15:54:28 crc kubenswrapper[4835]: E0216 15:54:28.945491 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerName="extract-content" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.945566 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerName="extract-content" Feb 16 15:54:28 crc kubenswrapper[4835]: E0216 15:54:28.945636 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerName="registry-server" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.945691 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" containerName="registry-server" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.945950 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8d877f-1819-4832-b7bd-690c62ddb4b3" 
containerName="registry-server" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.947161 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.950472 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5m7r9"/"kube-root-ca.crt" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.950467 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5m7r9"/"openshift-service-ca.crt" Feb 16 15:54:28 crc kubenswrapper[4835]: I0216 15:54:28.957152 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5m7r9/must-gather-xsr78"] Feb 16 15:54:29 crc kubenswrapper[4835]: I0216 15:54:29.092020 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrdg\" (UniqueName: \"kubernetes.io/projected/173d5357-97c2-4bd8-822d-7fd2645c30fb-kube-api-access-jtrdg\") pod \"must-gather-xsr78\" (UID: \"173d5357-97c2-4bd8-822d-7fd2645c30fb\") " pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 15:54:29 crc kubenswrapper[4835]: I0216 15:54:29.092318 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/173d5357-97c2-4bd8-822d-7fd2645c30fb-must-gather-output\") pod \"must-gather-xsr78\" (UID: \"173d5357-97c2-4bd8-822d-7fd2645c30fb\") " pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 15:54:29 crc kubenswrapper[4835]: I0216 15:54:29.193942 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrdg\" (UniqueName: \"kubernetes.io/projected/173d5357-97c2-4bd8-822d-7fd2645c30fb-kube-api-access-jtrdg\") pod \"must-gather-xsr78\" (UID: \"173d5357-97c2-4bd8-822d-7fd2645c30fb\") " pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 
16 15:54:29 crc kubenswrapper[4835]: I0216 15:54:29.193996 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/173d5357-97c2-4bd8-822d-7fd2645c30fb-must-gather-output\") pod \"must-gather-xsr78\" (UID: \"173d5357-97c2-4bd8-822d-7fd2645c30fb\") " pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 15:54:29 crc kubenswrapper[4835]: I0216 15:54:29.194431 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/173d5357-97c2-4bd8-822d-7fd2645c30fb-must-gather-output\") pod \"must-gather-xsr78\" (UID: \"173d5357-97c2-4bd8-822d-7fd2645c30fb\") " pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 15:54:29 crc kubenswrapper[4835]: I0216 15:54:29.223149 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrdg\" (UniqueName: \"kubernetes.io/projected/173d5357-97c2-4bd8-822d-7fd2645c30fb-kube-api-access-jtrdg\") pod \"must-gather-xsr78\" (UID: \"173d5357-97c2-4bd8-822d-7fd2645c30fb\") " pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 15:54:29 crc kubenswrapper[4835]: I0216 15:54:29.272390 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 15:54:29 crc kubenswrapper[4835]: I0216 15:54:29.765479 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5m7r9/must-gather-xsr78"] Feb 16 15:54:30 crc kubenswrapper[4835]: I0216 15:54:30.064317 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/must-gather-xsr78" event={"ID":"173d5357-97c2-4bd8-822d-7fd2645c30fb","Type":"ContainerStarted","Data":"bb9930c19af46ab9e0761d735c0dcfd190d9462ebba00ac08681c6a558212318"} Feb 16 15:54:30 crc kubenswrapper[4835]: E0216 15:54:30.387469 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:54:36 crc kubenswrapper[4835]: I0216 15:54:36.123575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/must-gather-xsr78" event={"ID":"173d5357-97c2-4bd8-822d-7fd2645c30fb","Type":"ContainerStarted","Data":"1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc"} Feb 16 15:54:36 crc kubenswrapper[4835]: I0216 15:54:36.124087 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/must-gather-xsr78" event={"ID":"173d5357-97c2-4bd8-822d-7fd2645c30fb","Type":"ContainerStarted","Data":"bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed"} Feb 16 15:54:40 crc kubenswrapper[4835]: I0216 15:54:40.848479 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5m7r9/must-gather-xsr78" podStartSLOduration=7.023400873 podStartE2EDuration="12.848462086s" podCreationTimestamp="2026-02-16 15:54:28 +0000 UTC" firstStartedPulling="2026-02-16 15:54:29.771332545 +0000 UTC 
m=+2819.063325450" lastFinishedPulling="2026-02-16 15:54:35.596393768 +0000 UTC m=+2824.888386663" observedRunningTime="2026-02-16 15:54:36.140058757 +0000 UTC m=+2825.432051652" watchObservedRunningTime="2026-02-16 15:54:40.848462086 +0000 UTC m=+2830.140454981" Feb 16 15:54:40 crc kubenswrapper[4835]: I0216 15:54:40.856703 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5m7r9/crc-debug-xrhm8"] Feb 16 15:54:40 crc kubenswrapper[4835]: I0216 15:54:40.858206 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:54:40 crc kubenswrapper[4835]: I0216 15:54:40.860952 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5m7r9"/"default-dockercfg-sbfpf" Feb 16 15:54:40 crc kubenswrapper[4835]: I0216 15:54:40.883318 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/240b58d0-0a32-44e3-a0a3-c90e939df892-host\") pod \"crc-debug-xrhm8\" (UID: \"240b58d0-0a32-44e3-a0a3-c90e939df892\") " pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:54:40 crc kubenswrapper[4835]: I0216 15:54:40.883379 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhf2\" (UniqueName: \"kubernetes.io/projected/240b58d0-0a32-44e3-a0a3-c90e939df892-kube-api-access-qrhf2\") pod \"crc-debug-xrhm8\" (UID: \"240b58d0-0a32-44e3-a0a3-c90e939df892\") " pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:54:40 crc kubenswrapper[4835]: I0216 15:54:40.984937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/240b58d0-0a32-44e3-a0a3-c90e939df892-host\") pod \"crc-debug-xrhm8\" (UID: \"240b58d0-0a32-44e3-a0a3-c90e939df892\") " pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:54:40 crc 
kubenswrapper[4835]: I0216 15:54:40.984980 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/240b58d0-0a32-44e3-a0a3-c90e939df892-host\") pod \"crc-debug-xrhm8\" (UID: \"240b58d0-0a32-44e3-a0a3-c90e939df892\") " pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:54:40 crc kubenswrapper[4835]: I0216 15:54:40.984994 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhf2\" (UniqueName: \"kubernetes.io/projected/240b58d0-0a32-44e3-a0a3-c90e939df892-kube-api-access-qrhf2\") pod \"crc-debug-xrhm8\" (UID: \"240b58d0-0a32-44e3-a0a3-c90e939df892\") " pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:54:41 crc kubenswrapper[4835]: I0216 15:54:41.002719 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhf2\" (UniqueName: \"kubernetes.io/projected/240b58d0-0a32-44e3-a0a3-c90e939df892-kube-api-access-qrhf2\") pod \"crc-debug-xrhm8\" (UID: \"240b58d0-0a32-44e3-a0a3-c90e939df892\") " pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:54:41 crc kubenswrapper[4835]: I0216 15:54:41.186444 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:54:42 crc kubenswrapper[4835]: I0216 15:54:42.179435 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" event={"ID":"240b58d0-0a32-44e3-a0a3-c90e939df892","Type":"ContainerStarted","Data":"1a62a3c737f3c28a1790d3056ee8c564eb03685e93c77ec8cd210f8b54db02e4"} Feb 16 15:54:44 crc kubenswrapper[4835]: E0216 15:54:44.381026 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:54:53 crc kubenswrapper[4835]: I0216 15:54:53.298410 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" event={"ID":"240b58d0-0a32-44e3-a0a3-c90e939df892","Type":"ContainerStarted","Data":"760cce77a4bb11b6e3379ae8f039fd8d97117a1f115f97d30b6834bdab05d6e7"} Feb 16 15:54:56 crc kubenswrapper[4835]: E0216 15:54:56.381368 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:55:08 crc kubenswrapper[4835]: I0216 15:55:08.416617 4835 generic.go:334] "Generic (PLEG): container finished" podID="240b58d0-0a32-44e3-a0a3-c90e939df892" containerID="760cce77a4bb11b6e3379ae8f039fd8d97117a1f115f97d30b6834bdab05d6e7" exitCode=0 Feb 16 15:55:08 crc kubenswrapper[4835]: I0216 15:55:08.417567 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" 
event={"ID":"240b58d0-0a32-44e3-a0a3-c90e939df892","Type":"ContainerDied","Data":"760cce77a4bb11b6e3379ae8f039fd8d97117a1f115f97d30b6834bdab05d6e7"} Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.552061 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.587600 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5m7r9/crc-debug-xrhm8"] Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.598194 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5m7r9/crc-debug-xrhm8"] Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.630672 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhf2\" (UniqueName: \"kubernetes.io/projected/240b58d0-0a32-44e3-a0a3-c90e939df892-kube-api-access-qrhf2\") pod \"240b58d0-0a32-44e3-a0a3-c90e939df892\" (UID: \"240b58d0-0a32-44e3-a0a3-c90e939df892\") " Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.630725 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/240b58d0-0a32-44e3-a0a3-c90e939df892-host\") pod \"240b58d0-0a32-44e3-a0a3-c90e939df892\" (UID: \"240b58d0-0a32-44e3-a0a3-c90e939df892\") " Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.630886 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/240b58d0-0a32-44e3-a0a3-c90e939df892-host" (OuterVolumeSpecName: "host") pod "240b58d0-0a32-44e3-a0a3-c90e939df892" (UID: "240b58d0-0a32-44e3-a0a3-c90e939df892"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.631300 4835 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/240b58d0-0a32-44e3-a0a3-c90e939df892-host\") on node \"crc\" DevicePath \"\"" Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.652688 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240b58d0-0a32-44e3-a0a3-c90e939df892-kube-api-access-qrhf2" (OuterVolumeSpecName: "kube-api-access-qrhf2") pod "240b58d0-0a32-44e3-a0a3-c90e939df892" (UID: "240b58d0-0a32-44e3-a0a3-c90e939df892"). InnerVolumeSpecName "kube-api-access-qrhf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:55:09 crc kubenswrapper[4835]: I0216 15:55:09.732939 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrhf2\" (UniqueName: \"kubernetes.io/projected/240b58d0-0a32-44e3-a0a3-c90e939df892-kube-api-access-qrhf2\") on node \"crc\" DevicePath \"\"" Feb 16 15:55:10 crc kubenswrapper[4835]: E0216 15:55:10.382404 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.436664 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a62a3c737f3c28a1790d3056ee8c564eb03685e93c77ec8cd210f8b54db02e4" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.436723 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5m7r9/crc-debug-xrhm8" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.822335 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5m7r9/crc-debug-4xdxj"] Feb 16 15:55:10 crc kubenswrapper[4835]: E0216 15:55:10.823055 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240b58d0-0a32-44e3-a0a3-c90e939df892" containerName="container-00" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.823069 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="240b58d0-0a32-44e3-a0a3-c90e939df892" containerName="container-00" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.823269 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="240b58d0-0a32-44e3-a0a3-c90e939df892" containerName="container-00" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.823974 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.827029 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5m7r9"/"default-dockercfg-sbfpf" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.957751 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66jtk\" (UniqueName: \"kubernetes.io/projected/f9066763-e663-4075-a464-5c2eeb29e187-kube-api-access-66jtk\") pod \"crc-debug-4xdxj\" (UID: \"f9066763-e663-4075-a464-5c2eeb29e187\") " pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:10 crc kubenswrapper[4835]: I0216 15:55:10.957891 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9066763-e663-4075-a464-5c2eeb29e187-host\") pod \"crc-debug-4xdxj\" (UID: \"f9066763-e663-4075-a464-5c2eeb29e187\") " 
pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.060396 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9066763-e663-4075-a464-5c2eeb29e187-host\") pod \"crc-debug-4xdxj\" (UID: \"f9066763-e663-4075-a464-5c2eeb29e187\") " pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.060575 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9066763-e663-4075-a464-5c2eeb29e187-host\") pod \"crc-debug-4xdxj\" (UID: \"f9066763-e663-4075-a464-5c2eeb29e187\") " pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.060622 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66jtk\" (UniqueName: \"kubernetes.io/projected/f9066763-e663-4075-a464-5c2eeb29e187-kube-api-access-66jtk\") pod \"crc-debug-4xdxj\" (UID: \"f9066763-e663-4075-a464-5c2eeb29e187\") " pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.080510 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66jtk\" (UniqueName: \"kubernetes.io/projected/f9066763-e663-4075-a464-5c2eeb29e187-kube-api-access-66jtk\") pod \"crc-debug-4xdxj\" (UID: \"f9066763-e663-4075-a464-5c2eeb29e187\") " pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.145272 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:11 crc kubenswrapper[4835]: W0216 15:55:11.174981 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9066763_e663_4075_a464_5c2eeb29e187.slice/crio-36eec2d52d1f53b8f101d299b8906737a2dd9a42ea9ec73a5be264c7515fb71a WatchSource:0}: Error finding container 36eec2d52d1f53b8f101d299b8906737a2dd9a42ea9ec73a5be264c7515fb71a: Status 404 returned error can't find the container with id 36eec2d52d1f53b8f101d299b8906737a2dd9a42ea9ec73a5be264c7515fb71a Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.400041 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240b58d0-0a32-44e3-a0a3-c90e939df892" path="/var/lib/kubelet/pods/240b58d0-0a32-44e3-a0a3-c90e939df892/volumes" Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.467947 4835 generic.go:334] "Generic (PLEG): container finished" podID="f9066763-e663-4075-a464-5c2eeb29e187" containerID="894e6374c757de1b1cc350ab873f36dcd18abd71e4abcc5edb57d1ca06a95f50" exitCode=1 Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.467993 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" event={"ID":"f9066763-e663-4075-a464-5c2eeb29e187","Type":"ContainerDied","Data":"894e6374c757de1b1cc350ab873f36dcd18abd71e4abcc5edb57d1ca06a95f50"} Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.468018 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" event={"ID":"f9066763-e663-4075-a464-5c2eeb29e187","Type":"ContainerStarted","Data":"36eec2d52d1f53b8f101d299b8906737a2dd9a42ea9ec73a5be264c7515fb71a"} Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.517322 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5m7r9/crc-debug-4xdxj"] Feb 16 15:55:11 crc kubenswrapper[4835]: I0216 15:55:11.530954 4835 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5m7r9/crc-debug-4xdxj"] Feb 16 15:55:12 crc kubenswrapper[4835]: I0216 15:55:12.587706 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:12 crc kubenswrapper[4835]: I0216 15:55:12.689452 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9066763-e663-4075-a464-5c2eeb29e187-host\") pod \"f9066763-e663-4075-a464-5c2eeb29e187\" (UID: \"f9066763-e663-4075-a464-5c2eeb29e187\") " Feb 16 15:55:12 crc kubenswrapper[4835]: I0216 15:55:12.689515 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9066763-e663-4075-a464-5c2eeb29e187-host" (OuterVolumeSpecName: "host") pod "f9066763-e663-4075-a464-5c2eeb29e187" (UID: "f9066763-e663-4075-a464-5c2eeb29e187"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:55:12 crc kubenswrapper[4835]: I0216 15:55:12.689550 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66jtk\" (UniqueName: \"kubernetes.io/projected/f9066763-e663-4075-a464-5c2eeb29e187-kube-api-access-66jtk\") pod \"f9066763-e663-4075-a464-5c2eeb29e187\" (UID: \"f9066763-e663-4075-a464-5c2eeb29e187\") " Feb 16 15:55:12 crc kubenswrapper[4835]: I0216 15:55:12.690763 4835 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9066763-e663-4075-a464-5c2eeb29e187-host\") on node \"crc\" DevicePath \"\"" Feb 16 15:55:12 crc kubenswrapper[4835]: I0216 15:55:12.702859 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9066763-e663-4075-a464-5c2eeb29e187-kube-api-access-66jtk" (OuterVolumeSpecName: "kube-api-access-66jtk") pod "f9066763-e663-4075-a464-5c2eeb29e187" (UID: 
"f9066763-e663-4075-a464-5c2eeb29e187"). InnerVolumeSpecName "kube-api-access-66jtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:55:12 crc kubenswrapper[4835]: I0216 15:55:12.795539 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66jtk\" (UniqueName: \"kubernetes.io/projected/f9066763-e663-4075-a464-5c2eeb29e187-kube-api-access-66jtk\") on node \"crc\" DevicePath \"\"" Feb 16 15:55:13 crc kubenswrapper[4835]: I0216 15:55:13.389630 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9066763-e663-4075-a464-5c2eeb29e187" path="/var/lib/kubelet/pods/f9066763-e663-4075-a464-5c2eeb29e187/volumes" Feb 16 15:55:13 crc kubenswrapper[4835]: I0216 15:55:13.487708 4835 scope.go:117] "RemoveContainer" containerID="894e6374c757de1b1cc350ab873f36dcd18abd71e4abcc5edb57d1ca06a95f50" Feb 16 15:55:13 crc kubenswrapper[4835]: I0216 15:55:13.487885 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5m7r9/crc-debug-4xdxj" Feb 16 15:55:13 crc kubenswrapper[4835]: E0216 15:55:13.593566 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9066763_e663_4075_a464_5c2eeb29e187.slice\": RecentStats: unable to find data in memory cache]" Feb 16 15:55:18 crc kubenswrapper[4835]: I0216 15:55:18.826115 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:55:18 crc kubenswrapper[4835]: I0216 15:55:18.826726 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:55:23 crc kubenswrapper[4835]: E0216 15:55:23.380789 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:55:34 crc kubenswrapper[4835]: E0216 15:55:34.381317 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:55:47 crc kubenswrapper[4835]: E0216 15:55:47.383803 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:55:48 crc kubenswrapper[4835]: I0216 15:55:48.586433 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:55:48 crc kubenswrapper[4835]: I0216 15:55:48.586496 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:56:00 crc kubenswrapper[4835]: E0216 15:56:00.380907 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.338161 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_23555de7-4851-4730-b8b3-9d788622420a/init-config-reloader/0.log" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.563551 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_23555de7-4851-4730-b8b3-9d788622420a/init-config-reloader/0.log" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.571615 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_23555de7-4851-4730-b8b3-9d788622420a/config-reloader/0.log" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.574670 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_23555de7-4851-4730-b8b3-9d788622420a/alertmanager/0.log" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.750062 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-555c89cd64-w6qv5_45d4ee34-3b8b-407a-a8f9-e31b32377c0c/barbican-api/0.log" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.754355 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-555c89cd64-w6qv5_45d4ee34-3b8b-407a-a8f9-e31b32377c0c/barbican-api-log/0.log" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.832308 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-7764659d9b-m4t6j_b2e52d59-832d-4a4a-ab60-b288415a7622/barbican-keystone-listener/0.log" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.941310 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7764659d9b-m4t6j_b2e52d59-832d-4a4a-ab60-b288415a7622/barbican-keystone-listener-log/0.log" Feb 16 15:56:01 crc kubenswrapper[4835]: I0216 15:56:01.975590 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c8dc89fcf-gqdlj_6b73d45a-cedb-4986-b66a-89a4aa44c1c5/barbican-worker/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.023385 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c8dc89fcf-gqdlj_6b73d45a-cedb-4986-b66a-89a4aa44c1c5/barbican-worker-log/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.196225 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b51fc856-a532-470f-aa5e-349bc749062b/ceilometer-central-agent/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.238567 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b51fc856-a532-470f-aa5e-349bc749062b/ceilometer-notification-agent/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.248024 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b51fc856-a532-470f-aa5e-349bc749062b/proxy-httpd/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.362270 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b51fc856-a532-470f-aa5e-349bc749062b/sg-core/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.471075 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4b239b59-1e9e-41e1-aa08-effb3b5cd78d/cinder-api/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.478219 4835 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4b239b59-1e9e-41e1-aa08-effb3b5cd78d/cinder-api-log/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.623298 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc/cinder-scheduler/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.696275 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff575bdb-76c0-4c2d-8fb5-fa5f7efbadfc/probe/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.906082 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_134112d9-c103-4429-b224-13589ad6d931/loki-compactor/0.log" Feb 16 15:56:02 crc kubenswrapper[4835]: I0216 15:56:02.995744 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-95gmb_f0df7f89-f92f-4f95-8150-5f864d8d4134/loki-distributor/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.116362 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-g5p5k_cbd2b381-c620-4ff8-9942-e9f5b1c484d4/gateway/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.181800 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-rqzc2_0724b33e-42df-4030-98fe-cf498befbf2e/gateway/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.308837 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_ef4ec5b3-b0ad-4a36-a280-67da2ffb786e/loki-index-gateway/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.474498 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_e246a943-0c6d-4738-8a73-d3e576819680/loki-ingester/0.log" Feb 16 15:56:03 crc 
kubenswrapper[4835]: I0216 15:56:03.556502 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-ts8jt_15cb4b80-ac3e-407f-ac7d-b18c4f936241/loki-querier/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.643775 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-crjgb_3638d231-c31c-4620-b3e1-d45083acee56/loki-query-frontend/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.756366 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-c8sbw_04860880-2e34-4a63-b26b-e1bb6163560f/init/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.901864 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-c8sbw_04860880-2e34-4a63-b26b-e1bb6163560f/dnsmasq-dns/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.913820 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-c8sbw_04860880-2e34-4a63-b26b-e1bb6163560f/init/0.log" Feb 16 15:56:03 crc kubenswrapper[4835]: I0216 15:56:03.963895 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c/glance-httpd/0.log" Feb 16 15:56:04 crc kubenswrapper[4835]: I0216 15:56:04.128156 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d5eb4ee0-adf1-4d3f-bb3d-d2e3f12da75c/glance-log/0.log" Feb 16 15:56:04 crc kubenswrapper[4835]: I0216 15:56:04.195136 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b05d3637-75bb-4e72-85d0-d130d949a503/glance-httpd/0.log" Feb 16 15:56:04 crc kubenswrapper[4835]: I0216 15:56:04.227507 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_b05d3637-75bb-4e72-85d0-d130d949a503/glance-log/0.log" Feb 16 15:56:04 crc kubenswrapper[4835]: I0216 15:56:04.385230 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f568f5d7f-9h6bh_1250312c-ef9e-416f-a06d-72d1d31f433f/keystone-api/0.log" Feb 16 15:56:04 crc kubenswrapper[4835]: I0216 15:56:04.472968 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f9fce61c-9bbc-46df-9441-890185c4c526/kube-state-metrics/0.log" Feb 16 15:56:04 crc kubenswrapper[4835]: I0216 15:56:04.716722 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77567867dc-2fttf_3f3ac245-33a3-4481-8139-1d26969a6a94/neutron-api/0.log" Feb 16 15:56:04 crc kubenswrapper[4835]: I0216 15:56:04.834736 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77567867dc-2fttf_3f3ac245-33a3-4481-8139-1d26969a6a94/neutron-httpd/0.log" Feb 16 15:56:05 crc kubenswrapper[4835]: I0216 15:56:05.208236 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_04aa3f43-11db-4f49-81bc-a8d6e225020e/nova-api-log/0.log" Feb 16 15:56:05 crc kubenswrapper[4835]: I0216 15:56:05.221539 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_04aa3f43-11db-4f49-81bc-a8d6e225020e/nova-api-api/0.log" Feb 16 15:56:05 crc kubenswrapper[4835]: I0216 15:56:05.457611 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f3655b0a-95e0-4eac-95c6-07197479c042/nova-cell0-conductor-conductor/0.log" Feb 16 15:56:05 crc kubenswrapper[4835]: I0216 15:56:05.559268 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_91b52c41-2afa-4ee9-8239-5ffaf418e1f1/nova-cell1-conductor-conductor/0.log" Feb 16 15:56:05 crc kubenswrapper[4835]: I0216 15:56:05.695466 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c90c22cd-b62e-4d0e-bf45-ba03b2241ba7/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 15:56:05 crc kubenswrapper[4835]: I0216 15:56:05.791335 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7dc85a39-1037-45d8-9221-f4d30e0b01f7/nova-metadata-log/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.106820 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8781e3b3-2b5c-4a33-9cb4-f21080cd6743/nova-scheduler-scheduler/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.207107 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_752083aa-579e-46dc-addb-b923b394b393/mysql-bootstrap/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.417597 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_752083aa-579e-46dc-addb-b923b394b393/mysql-bootstrap/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.442207 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_752083aa-579e-46dc-addb-b923b394b393/galera/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.510151 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7dc85a39-1037-45d8-9221-f4d30e0b01f7/nova-metadata-metadata/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.630108 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32e52175-bb63-4076-a7af-4cf969b90ec6/mysql-bootstrap/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.851504 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_32e52175-bb63-4076-a7af-4cf969b90ec6/galera/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.887256 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_32e52175-bb63-4076-a7af-4cf969b90ec6/mysql-bootstrap/0.log" Feb 16 15:56:06 crc kubenswrapper[4835]: I0216 15:56:06.923806 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a0764574-e9ce-46bd-9cf5-7aefa9b455db/openstackclient/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.095465 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kxl4t_a510fbea-dfa0-48e9-9557-a9e7f75cae9a/ovsdb-server-init/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.117088 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2gz9j_80e12b34-9a09-485e-b0b0-bcf8ee4ed5ce/openstack-network-exporter/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.419931 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kxl4t_a510fbea-dfa0-48e9-9557-a9e7f75cae9a/ovs-vswitchd/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.426739 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kxl4t_a510fbea-dfa0-48e9-9557-a9e7f75cae9a/ovsdb-server-init/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.469682 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kxl4t_a510fbea-dfa0-48e9-9557-a9e7f75cae9a/ovsdb-server/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.622715 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-st4vx_2efbff9d-b303-430c-b06c-36b79284a3f1/ovn-controller/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.659278 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_01805b9f-62e0-4054-84e9-6e6e6a448afc/openstack-network-exporter/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.822457 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_01805b9f-62e0-4054-84e9-6e6e6a448afc/ovn-northd/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.952019 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c42ab514-0d06-4182-9ef7-6bcd9fb2afd8/ovsdbserver-nb/0.log" Feb 16 15:56:07 crc kubenswrapper[4835]: I0216 15:56:07.961385 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c42ab514-0d06-4182-9ef7-6bcd9fb2afd8/openstack-network-exporter/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.342437 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1949747b-769a-41b5-96cc-5d51092d1615/openstack-network-exporter/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.383889 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1949747b-769a-41b5-96cc-5d51092d1615/ovsdbserver-sb/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.457745 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db8d74b8d-b8dp6_ad10e2a4-7521-47a2-bf1f-d1e4b83b1136/placement-api/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.603102 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db8d74b8d-b8dp6_ad10e2a4-7521-47a2-bf1f-d1e4b83b1136/placement-log/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.644016 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a20bb04f-11d1-4f24-a96d-2c451f98b8bd/init-config-reloader/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.848606 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a20bb04f-11d1-4f24-a96d-2c451f98b8bd/prometheus/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.899492 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_a20bb04f-11d1-4f24-a96d-2c451f98b8bd/init-config-reloader/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.910173 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a20bb04f-11d1-4f24-a96d-2c451f98b8bd/config-reloader/0.log" Feb 16 15:56:08 crc kubenswrapper[4835]: I0216 15:56:08.945572 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a20bb04f-11d1-4f24-a96d-2c451f98b8bd/thanos-sidecar/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.103761 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c673663-5be6-4ed4-b2b5-9a80e72391c6/setup-container/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.374597 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c673663-5be6-4ed4-b2b5-9a80e72391c6/setup-container/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.406722 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02aa07ee-7fa8-40e8-bd6a-2c98dc10edda/setup-container/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.427718 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9c673663-5be6-4ed4-b2b5-9a80e72391c6/rabbitmq/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.631448 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02aa07ee-7fa8-40e8-bd6a-2c98dc10edda/rabbitmq/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.675403 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_02aa07ee-7fa8-40e8-bd6a-2c98dc10edda/setup-container/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.718343 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-bb74b7cf9-wlnq6_cb2fb4f7-7c30-454f-8b06-8ab94eed8429/proxy-httpd/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.839348 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-bb74b7cf9-wlnq6_cb2fb4f7-7c30-454f-8b06-8ab94eed8429/proxy-server/0.log" Feb 16 15:56:09 crc kubenswrapper[4835]: I0216 15:56:09.928254 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-rckvw_e13e8e74-7a87-4ed8-b8c7-91ec164f5ce5/swift-ring-rebalance/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.096127 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/account-auditor/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.183179 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/account-replicator/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.225872 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/account-reaper/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.283948 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/account-server/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.340972 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/container-auditor/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.446648 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/container-replicator/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.485339 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/container-server/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.508406 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/container-updater/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.577788 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/object-auditor/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.655111 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/object-expirer/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.694947 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/object-replicator/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.724697 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/object-server/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.770781 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/object-updater/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.871869 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/rsync/0.log" Feb 16 15:56:10 crc kubenswrapper[4835]: I0216 15:56:10.908762 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89ca92c6-cb91-49f1-a005-047759f93742/swift-recon-cron/0.log" Feb 16 15:56:14 crc kubenswrapper[4835]: I0216 15:56:14.278586 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_8da5ef68-09c6-4938-99a8-b728f03b4d14/memcached/0.log" Feb 16 15:56:15 crc kubenswrapper[4835]: E0216 15:56:15.381615 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:56:18 crc kubenswrapper[4835]: I0216 15:56:18.586394 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:56:18 crc kubenswrapper[4835]: I0216 15:56:18.587034 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:56:18 crc kubenswrapper[4835]: I0216 15:56:18.587107 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" Feb 16 15:56:18 crc kubenswrapper[4835]: I0216 15:56:18.588268 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23"} pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:56:18 crc kubenswrapper[4835]: I0216 15:56:18.588347 4835 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" containerID="cri-o://7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" gracePeriod=600 Feb 16 15:56:18 crc kubenswrapper[4835]: E0216 15:56:18.723502 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:56:19 crc kubenswrapper[4835]: I0216 15:56:19.418736 4835 generic.go:334] "Generic (PLEG): container finished" podID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" exitCode=0 Feb 16 15:56:19 crc kubenswrapper[4835]: I0216 15:56:19.418824 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerDied","Data":"7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23"} Feb 16 15:56:19 crc kubenswrapper[4835]: I0216 15:56:19.418990 4835 scope.go:117] "RemoveContainer" containerID="1ed5073fbefd2b2c9b0e060f08d45524b6c48bb83dc82a090687f31f0b53f4d6" Feb 16 15:56:19 crc kubenswrapper[4835]: I0216 15:56:19.420044 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:56:19 crc kubenswrapper[4835]: E0216 15:56:19.420302 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:56:28 crc kubenswrapper[4835]: E0216 15:56:28.380684 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:56:31 crc kubenswrapper[4835]: I0216 15:56:31.385811 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:56:31 crc kubenswrapper[4835]: E0216 15:56:31.386354 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:56:34 crc kubenswrapper[4835]: I0216 15:56:34.945219 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj_452942eb-5cd7-495f-88e4-9f5a272569e3/util/0.log" Feb 16 15:56:35 crc kubenswrapper[4835]: I0216 15:56:35.356220 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj_452942eb-5cd7-495f-88e4-9f5a272569e3/util/0.log" Feb 16 15:56:35 crc kubenswrapper[4835]: I0216 15:56:35.357243 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj_452942eb-5cd7-495f-88e4-9f5a272569e3/pull/0.log" Feb 16 15:56:35 crc kubenswrapper[4835]: I0216 15:56:35.364043 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj_452942eb-5cd7-495f-88e4-9f5a272569e3/pull/0.log" Feb 16 15:56:35 crc kubenswrapper[4835]: I0216 15:56:35.589686 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj_452942eb-5cd7-495f-88e4-9f5a272569e3/util/0.log" Feb 16 15:56:35 crc kubenswrapper[4835]: I0216 15:56:35.605339 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj_452942eb-5cd7-495f-88e4-9f5a272569e3/pull/0.log" Feb 16 15:56:35 crc kubenswrapper[4835]: I0216 15:56:35.619397 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_60861ffdf97ec7dc4f2c1c5cffa3882985aec56c8266167bdcc9f13e985qsnj_452942eb-5cd7-495f-88e4-9f5a272569e3/extract/0.log" Feb 16 15:56:36 crc kubenswrapper[4835]: I0216 15:56:35.993740 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-r658s_c9835f90-5158-4aa7-9cc5-4d3a1e1feb63/manager/0.log" Feb 16 15:56:36 crc kubenswrapper[4835]: I0216 15:56:36.357840 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-dztnx_79d4f74b-f0e3-4a7e-b862-0e8f9d52e69b/manager/0.log" Feb 16 15:56:36 crc kubenswrapper[4835]: I0216 15:56:36.569982 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-zwf6h_cd439265-4f6d-48db-bf6e-3353288aff58/manager/0.log" Feb 16 15:56:36 crc kubenswrapper[4835]: I0216 
15:56:36.736476 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-p7bhp_e9548890-1ce7-42a6-a870-ae0727b81a68/manager/0.log" Feb 16 15:56:37 crc kubenswrapper[4835]: I0216 15:56:37.276626 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-724ns_4155bc83-03aa-4e2d-a024-0967569539b4/manager/0.log" Feb 16 15:56:37 crc kubenswrapper[4835]: I0216 15:56:37.347931 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-w9l9w_68f68635-db35-44a5-8256-cea92b856a61/manager/0.log" Feb 16 15:56:37 crc kubenswrapper[4835]: I0216 15:56:37.379388 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-kwlf2_a7d34cbf-f94d-4170-9937-b0d05d9785e2/manager/0.log" Feb 16 15:56:37 crc kubenswrapper[4835]: I0216 15:56:37.669887 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-xzw5m_18703fe7-0c06-4977-9357-b9eff4ecdeba/manager/0.log" Feb 16 15:56:38 crc kubenswrapper[4835]: I0216 15:56:38.025895 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-4g6lm_4617e695-08f5-496c-92cc-496f6ce85441/manager/0.log" Feb 16 15:56:38 crc kubenswrapper[4835]: I0216 15:56:38.509883 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-ztgc5_af7a1a80-d93a-4587-adaa-dda2c307e344/manager/0.log" Feb 16 15:56:38 crc kubenswrapper[4835]: I0216 15:56:38.546131 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-sf54k_b3a956a8-c3c7-4ca9-b8d5-902e89252e7c/manager/0.log" Feb 16 15:56:38 crc 
kubenswrapper[4835]: I0216 15:56:38.929358 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-v56sr_f12b307e-157c-4aa1-91a5-2d55f2fa7def/manager/0.log" Feb 16 15:56:39 crc kubenswrapper[4835]: I0216 15:56:39.385343 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c6vdbr_2669aead-589f-4383-af0f-abea4a49f6fd/manager/0.log" Feb 16 15:56:39 crc kubenswrapper[4835]: I0216 15:56:39.892560 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5857b4c744-wcnkx_ab3e43ea-05f5-400b-ba5e-95d106b9697a/operator/0.log" Feb 16 15:56:40 crc kubenswrapper[4835]: I0216 15:56:40.101707 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-td85r_eb1bcd66-4bdb-42e3-be22-bf9752941ecc/registry-server/0.log" Feb 16 15:56:40 crc kubenswrapper[4835]: I0216 15:56:40.398270 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-52427_430b904f-ff3b-4b49-a212-7affb09621ef/manager/0.log" Feb 16 15:56:40 crc kubenswrapper[4835]: I0216 15:56:40.654401 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-g5cw4_40e93ce0-f7ad-4fc7-ab2d-5fcf556bd256/manager/0.log" Feb 16 15:56:40 crc kubenswrapper[4835]: I0216 15:56:40.809631 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-4bhnk_602f50b2-0fce-474b-a6ca-cbcdeaa8ff9e/manager/0.log" Feb 16 15:56:40 crc kubenswrapper[4835]: I0216 15:56:40.877436 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-29wr6_e8e216ae-88d9-42e2-a387-b264904e7e20/operator/0.log" Feb 16 
15:56:41 crc kubenswrapper[4835]: I0216 15:56:41.002473 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54dd757795-lpkxx_1de323fb-2bae-44d3-a31a-07b0f2d9d53b/manager/0.log" Feb 16 15:56:41 crc kubenswrapper[4835]: I0216 15:56:41.060040 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-n52fd_e8ca5fe8-55ed-40a0-987e-59face1c1a19/manager/0.log" Feb 16 15:56:41 crc kubenswrapper[4835]: I0216 15:56:41.259283 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-mx5lg_24f969bf-0ff8-4e85-a388-bde2f6ad68bb/manager/0.log" Feb 16 15:56:41 crc kubenswrapper[4835]: I0216 15:56:41.416204 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-2k2dp_76535411-ec0f-4b41-9d6e-084d72e4deec/manager/0.log" Feb 16 15:56:41 crc kubenswrapper[4835]: I0216 15:56:41.742232 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5884f785c-68ssq_91557fa8-9b53-43ea-b9bb-13117ee5d714/manager/0.log" Feb 16 15:56:42 crc kubenswrapper[4835]: E0216 15:56:42.381420 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:56:42 crc kubenswrapper[4835]: I0216 15:56:42.950352 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-cljs5_fa7014ac-eeb8-4e1e-a3d3-e852d2b6c765/manager/0.log" Feb 16 15:56:43 crc kubenswrapper[4835]: I0216 15:56:43.379432 4835 scope.go:117] 
"RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:56:43 crc kubenswrapper[4835]: E0216 15:56:43.379893 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:56:54 crc kubenswrapper[4835]: I0216 15:56:54.384122 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:56:54 crc kubenswrapper[4835]: E0216 15:56:54.487070 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:56:54 crc kubenswrapper[4835]: E0216 15:56:54.487141 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:56:54 crc kubenswrapper[4835]: E0216 15:56:54.487314 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:56:54 crc kubenswrapper[4835]: E0216 15:56:54.488506 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:56:56 crc kubenswrapper[4835]: I0216 15:56:56.379052 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:56:56 crc kubenswrapper[4835]: E0216 15:56:56.379810 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:57:00 crc kubenswrapper[4835]: I0216 15:57:00.564384 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xpgbr_acf9bac3-c5bc-4294-83f0-1e52c261baa3/control-plane-machine-set-operator/0.log" Feb 16 15:57:00 crc kubenswrapper[4835]: I0216 15:57:00.719085 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-f6rdr_601d76ac-3e65-4ef1-9291-cd0e647ab37a/kube-rbac-proxy/0.log" Feb 16 15:57:00 crc kubenswrapper[4835]: I0216 15:57:00.749884 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-f6rdr_601d76ac-3e65-4ef1-9291-cd0e647ab37a/machine-api-operator/0.log" Feb 16 15:57:09 crc kubenswrapper[4835]: I0216 15:57:09.378755 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:57:09 crc kubenswrapper[4835]: E0216 15:57:09.379588 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:57:09 crc kubenswrapper[4835]: E0216 15:57:09.381442 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:57:12 crc kubenswrapper[4835]: I0216 15:57:12.816290 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qhxb8_845954bd-7996-428a-8b39-9746616e7e1e/cert-manager-controller/0.log" Feb 16 15:57:12 crc kubenswrapper[4835]: I0216 15:57:12.984833 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mnqkl_74f1c44a-b017-4269-96c0-9dc9359becef/cert-manager-cainjector/0.log" Feb 16 15:57:13 crc kubenswrapper[4835]: I0216 15:57:13.031831 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-z9xgl_b37e2863-cd3c-45b3-b774-ad506f0abeef/cert-manager-webhook/0.log" Feb 16 15:57:23 crc kubenswrapper[4835]: E0216 15:57:23.380228 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:57:24 crc kubenswrapper[4835]: I0216 15:57:24.378563 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:57:24 crc 
kubenswrapper[4835]: E0216 15:57:24.379196 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:57:27 crc kubenswrapper[4835]: I0216 15:57:27.006768 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-5zd7d_7e4599b8-b807-4433-bd32-b134460e028a/nmstate-console-plugin/0.log" Feb 16 15:57:27 crc kubenswrapper[4835]: I0216 15:57:27.167643 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bm4wc_96d2b8c3-0eac-405b-a1a9-81230ea1a601/nmstate-handler/0.log" Feb 16 15:57:27 crc kubenswrapper[4835]: I0216 15:57:27.218829 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-gx9lk_9d8159c8-4992-44d0-a520-113c6b5a6d15/kube-rbac-proxy/0.log" Feb 16 15:57:27 crc kubenswrapper[4835]: I0216 15:57:27.321418 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-gx9lk_9d8159c8-4992-44d0-a520-113c6b5a6d15/nmstate-metrics/0.log" Feb 16 15:57:27 crc kubenswrapper[4835]: I0216 15:57:27.405320 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-ndjz7_c98c3fc0-ce33-4bef-9089-1dd9da5100a1/nmstate-operator/0.log" Feb 16 15:57:27 crc kubenswrapper[4835]: I0216 15:57:27.522677 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-p8bnb_0d8011f2-2374-49ae-8d2c-f86f2754d7c3/nmstate-webhook/0.log" Feb 16 15:57:37 crc kubenswrapper[4835]: I0216 15:57:37.379360 4835 scope.go:117] 
"RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:57:37 crc kubenswrapper[4835]: E0216 15:57:37.380683 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:57:38 crc kubenswrapper[4835]: E0216 15:57:38.380981 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:57:40 crc kubenswrapper[4835]: I0216 15:57:40.132907 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5b666486b-bbgfm_ecd1cee6-74c4-493f-9a48-c5e5cecb7cde/manager/0.log" Feb 16 15:57:40 crc kubenswrapper[4835]: I0216 15:57:40.142411 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5b666486b-bbgfm_ecd1cee6-74c4-493f-9a48-c5e5cecb7cde/kube-rbac-proxy/0.log" Feb 16 15:57:50 crc kubenswrapper[4835]: I0216 15:57:50.379857 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:57:50 crc kubenswrapper[4835]: E0216 15:57:50.380920 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:57:50 crc kubenswrapper[4835]: E0216 15:57:50.382195 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:57:52 crc kubenswrapper[4835]: I0216 15:57:52.263387 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zdcgz_5176642c-2ed1-4ed0-bdb8-38863827e4db/prometheus-operator/0.log" Feb 16 15:57:52 crc kubenswrapper[4835]: I0216 15:57:52.427981 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_5b1bd882-cd0a-4194-8c67-fe43261fb379/prometheus-operator-admission-webhook/0.log" Feb 16 15:57:52 crc kubenswrapper[4835]: I0216 15:57:52.463943 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_5c645ff2-7682-4ecd-8f33-112527a557ae/prometheus-operator-admission-webhook/0.log" Feb 16 15:57:52 crc kubenswrapper[4835]: I0216 15:57:52.638464 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tgbsg_8ab77924-eead-4baa-bad7-82def29f30c8/operator/0.log" Feb 16 15:57:52 crc kubenswrapper[4835]: I0216 15:57:52.659307 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-rn857_efb00222-e09d-4776-9026-91280c520e73/perses-operator/0.log" Feb 16 15:58:03 crc kubenswrapper[4835]: I0216 15:58:03.379430 4835 
scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:58:03 crc kubenswrapper[4835]: E0216 15:58:03.380349 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:58:05 crc kubenswrapper[4835]: E0216 15:58:05.381024 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:58:05 crc kubenswrapper[4835]: I0216 15:58:05.476755 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-f58q9_0dee595f-4a5b-4986-838c-01782210cb69/kube-rbac-proxy/0.log" Feb 16 15:58:05 crc kubenswrapper[4835]: I0216 15:58:05.581228 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-f58q9_0dee595f-4a5b-4986-838c-01782210cb69/controller/0.log" Feb 16 15:58:05 crc kubenswrapper[4835]: I0216 15:58:05.648306 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-b84c9_9006758e-7767-40e8-a854-d5daeb3d7a2c/frr-k8s-webhook-server/0.log" Feb 16 15:58:05 crc kubenswrapper[4835]: I0216 15:58:05.754935 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-frr-files/0.log" Feb 16 15:58:05 crc kubenswrapper[4835]: I0216 15:58:05.953342 4835 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-frr-files/0.log" Feb 16 15:58:05 crc kubenswrapper[4835]: I0216 15:58:05.986820 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-reloader/0.log" Feb 16 15:58:05 crc kubenswrapper[4835]: I0216 15:58:05.993030 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-reloader/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.006417 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-metrics/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.160794 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-frr-files/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.171294 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-metrics/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.177510 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-metrics/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.183674 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-reloader/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.363913 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-frr-files/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.381490 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-reloader/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.400490 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/cp-metrics/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.417385 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/controller/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.520166 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/frr-metrics/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.567335 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/kube-rbac-proxy/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.619952 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/kube-rbac-proxy-frr/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.740550 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/reloader/0.log" Feb 16 15:58:06 crc kubenswrapper[4835]: I0216 15:58:06.871849 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5dccc995f8-z6x79_4aa6fafd-b26a-4c22-b680-7123fabb665e/manager/0.log" Feb 16 15:58:07 crc kubenswrapper[4835]: I0216 15:58:07.012831 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58dccf7f7b-krtpx_1d8c24f5-95ab-48d4-9007-75a10c8e743a/webhook-server/0.log" Feb 16 15:58:07 crc kubenswrapper[4835]: I0216 15:58:07.220261 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-nwtq4_f1616308-8325-4a28-87e2-c72ed44cc83c/kube-rbac-proxy/0.log" Feb 16 15:58:07 crc kubenswrapper[4835]: I0216 15:58:07.706501 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nwtq4_f1616308-8325-4a28-87e2-c72ed44cc83c/speaker/0.log" Feb 16 15:58:07 crc kubenswrapper[4835]: I0216 15:58:07.829022 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wh6l7_cc5ce011-3151-4f6d-98d7-b20df83ff8b3/frr/0.log" Feb 16 15:58:18 crc kubenswrapper[4835]: I0216 15:58:18.378928 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:58:18 crc kubenswrapper[4835]: E0216 15:58:18.379632 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:58:19 crc kubenswrapper[4835]: E0216 15:58:19.381651 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:58:19 crc kubenswrapper[4835]: I0216 15:58:19.869503 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z_21de0e14-792d-4a0f-9a1b-6303df1eac87/util/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.060381 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z_21de0e14-792d-4a0f-9a1b-6303df1eac87/util/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.068581 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z_21de0e14-792d-4a0f-9a1b-6303df1eac87/pull/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.084680 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z_21de0e14-792d-4a0f-9a1b-6303df1eac87/pull/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.233871 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z_21de0e14-792d-4a0f-9a1b-6303df1eac87/util/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.244118 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z_21de0e14-792d-4a0f-9a1b-6303df1eac87/extract/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.261677 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651jm79z_21de0e14-792d-4a0f-9a1b-6303df1eac87/pull/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.407721 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq_b5e608e7-9be5-4109-afef-3f02146e5dbb/util/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.585605 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq_b5e608e7-9be5-4109-afef-3f02146e5dbb/pull/0.log" Feb 16 
15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.586941 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq_b5e608e7-9be5-4109-afef-3f02146e5dbb/pull/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.587761 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq_b5e608e7-9be5-4109-afef-3f02146e5dbb/util/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.743300 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq_b5e608e7-9be5-4109-afef-3f02146e5dbb/pull/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.750597 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq_b5e608e7-9be5-4109-afef-3f02146e5dbb/extract/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.854086 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bbvq_b5e608e7-9be5-4109-afef-3f02146e5dbb/util/0.log" Feb 16 15:58:20 crc kubenswrapper[4835]: I0216 15:58:20.952948 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22_54746025-5068-4cc5-ba0a-a24755a67627/util/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.136642 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22_54746025-5068-4cc5-ba0a-a24755a67627/util/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.138402 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22_54746025-5068-4cc5-ba0a-a24755a67627/pull/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.145053 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22_54746025-5068-4cc5-ba0a-a24755a67627/pull/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.328276 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22_54746025-5068-4cc5-ba0a-a24755a67627/util/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.331838 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22_54746025-5068-4cc5-ba0a-a24755a67627/pull/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.337284 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2137sr22_54746025-5068-4cc5-ba0a-a24755a67627/extract/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.493210 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zx6gk_0fa97f53-b6fe-497f-b2b4-fded6b7a9285/extract-utilities/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.677313 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zx6gk_0fa97f53-b6fe-497f-b2b4-fded6b7a9285/extract-content/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.678909 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zx6gk_0fa97f53-b6fe-497f-b2b4-fded6b7a9285/extract-content/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.681216 4835 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zx6gk_0fa97f53-b6fe-497f-b2b4-fded6b7a9285/extract-utilities/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.881478 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zx6gk_0fa97f53-b6fe-497f-b2b4-fded6b7a9285/extract-utilities/0.log" Feb 16 15:58:21 crc kubenswrapper[4835]: I0216 15:58:21.934351 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zx6gk_0fa97f53-b6fe-497f-b2b4-fded6b7a9285/extract-content/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.181473 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4qwnq_c174b939-7427-41a4-8178-89525fb0186d/extract-utilities/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.228708 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zx6gk_0fa97f53-b6fe-497f-b2b4-fded6b7a9285/registry-server/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.346430 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4qwnq_c174b939-7427-41a4-8178-89525fb0186d/extract-utilities/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.373720 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4qwnq_c174b939-7427-41a4-8178-89525fb0186d/extract-content/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.407319 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4qwnq_c174b939-7427-41a4-8178-89525fb0186d/extract-content/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.587250 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-4qwnq_c174b939-7427-41a4-8178-89525fb0186d/extract-content/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.593778 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4qwnq_c174b939-7427-41a4-8178-89525fb0186d/extract-utilities/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.818070 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s_17e06f1c-269d-46fe-aec8-1791239a585a/util/0.log" Feb 16 15:58:22 crc kubenswrapper[4835]: I0216 15:58:22.987593 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4qwnq_c174b939-7427-41a4-8178-89525fb0186d/registry-server/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.019517 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s_17e06f1c-269d-46fe-aec8-1791239a585a/pull/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.024923 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s_17e06f1c-269d-46fe-aec8-1791239a585a/util/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.066544 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s_17e06f1c-269d-46fe-aec8-1791239a585a/pull/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.204456 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s_17e06f1c-269d-46fe-aec8-1791239a585a/extract/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.205725 4835 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s_17e06f1c-269d-46fe-aec8-1791239a585a/util/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.206001 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6492s_17e06f1c-269d-46fe-aec8-1791239a585a/pull/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.398187 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-42xkj_f71e123f-9ed2-44f0-806c-888cd24c0c54/extract-utilities/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.413560 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-l28fs_bbc62417-6a2f-4620-acfa-7c2fac9e4c42/marketplace-operator/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.560328 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-42xkj_f71e123f-9ed2-44f0-806c-888cd24c0c54/extract-content/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.582485 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-42xkj_f71e123f-9ed2-44f0-806c-888cd24c0c54/extract-utilities/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.591276 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-42xkj_f71e123f-9ed2-44f0-806c-888cd24c0c54/extract-content/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.749446 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-42xkj_f71e123f-9ed2-44f0-806c-888cd24c0c54/extract-utilities/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.795446 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-42xkj_f71e123f-9ed2-44f0-806c-888cd24c0c54/extract-content/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.845464 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-42xkj_f71e123f-9ed2-44f0-806c-888cd24c0c54/registry-server/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.845770 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4252j_f58c4632-6f14-420c-b220-e362cfbf7208/extract-utilities/0.log" Feb 16 15:58:23 crc kubenswrapper[4835]: I0216 15:58:23.995385 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4252j_f58c4632-6f14-420c-b220-e362cfbf7208/extract-utilities/0.log" Feb 16 15:58:24 crc kubenswrapper[4835]: I0216 15:58:24.018654 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4252j_f58c4632-6f14-420c-b220-e362cfbf7208/extract-content/0.log" Feb 16 15:58:24 crc kubenswrapper[4835]: I0216 15:58:24.022158 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4252j_f58c4632-6f14-420c-b220-e362cfbf7208/extract-content/0.log" Feb 16 15:58:24 crc kubenswrapper[4835]: I0216 15:58:24.161613 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4252j_f58c4632-6f14-420c-b220-e362cfbf7208/extract-utilities/0.log" Feb 16 15:58:24 crc kubenswrapper[4835]: I0216 15:58:24.198668 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4252j_f58c4632-6f14-420c-b220-e362cfbf7208/extract-content/0.log" Feb 16 15:58:24 crc kubenswrapper[4835]: I0216 15:58:24.567399 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4252j_f58c4632-6f14-420c-b220-e362cfbf7208/registry-server/0.log" Feb 16 
15:58:29 crc kubenswrapper[4835]: I0216 15:58:29.379242 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:58:29 crc kubenswrapper[4835]: E0216 15:58:29.380176 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:58:34 crc kubenswrapper[4835]: E0216 15:58:34.380661 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:58:36 crc kubenswrapper[4835]: I0216 15:58:36.374498 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b699949c-pvlgm_5b1bd882-cd0a-4194-8c67-fe43261fb379/prometheus-operator-admission-webhook/0.log" Feb 16 15:58:36 crc kubenswrapper[4835]: I0216 15:58:36.442227 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b699949c-qs52t_5c645ff2-7682-4ecd-8f33-112527a557ae/prometheus-operator-admission-webhook/0.log" Feb 16 15:58:36 crc kubenswrapper[4835]: I0216 15:58:36.453926 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zdcgz_5176642c-2ed1-4ed0-bdb8-38863827e4db/prometheus-operator/0.log" Feb 16 15:58:36 crc kubenswrapper[4835]: I0216 15:58:36.643203 4835 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tgbsg_8ab77924-eead-4baa-bad7-82def29f30c8/operator/0.log" Feb 16 15:58:36 crc kubenswrapper[4835]: I0216 15:58:36.651403 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-rn857_efb00222-e09d-4776-9026-91280c520e73/perses-operator/0.log" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.181937 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jt98x"] Feb 16 15:58:40 crc kubenswrapper[4835]: E0216 15:58:40.182965 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9066763-e663-4075-a464-5c2eeb29e187" containerName="container-00" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.182983 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9066763-e663-4075-a464-5c2eeb29e187" containerName="container-00" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.183261 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9066763-e663-4075-a464-5c2eeb29e187" containerName="container-00" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.185067 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.194242 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-catalog-content\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.194442 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f2vj\" (UniqueName: \"kubernetes.io/projected/b1890aa8-85f1-4713-8d8f-e755adb6ea59-kube-api-access-5f2vj\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.194667 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-utilities\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.206129 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt98x"] Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.296264 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f2vj\" (UniqueName: \"kubernetes.io/projected/b1890aa8-85f1-4713-8d8f-e755adb6ea59-kube-api-access-5f2vj\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.296819 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-utilities\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.296958 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-catalog-content\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.297326 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-catalog-content\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.297331 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-utilities\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.316973 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f2vj\" (UniqueName: \"kubernetes.io/projected/b1890aa8-85f1-4713-8d8f-e755adb6ea59-kube-api-access-5f2vj\") pod \"redhat-marketplace-jt98x\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.378818 4835 scope.go:117] "RemoveContainer" 
containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:58:40 crc kubenswrapper[4835]: E0216 15:58:40.379145 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:58:40 crc kubenswrapper[4835]: I0216 15:58:40.514926 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:41 crc kubenswrapper[4835]: I0216 15:58:41.022046 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt98x"] Feb 16 15:58:41 crc kubenswrapper[4835]: I0216 15:58:41.127903 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt98x" event={"ID":"b1890aa8-85f1-4713-8d8f-e755adb6ea59","Type":"ContainerStarted","Data":"38d81136189da024a18c9d659a0140c372dc919ea2eda627322a258980e18554"} Feb 16 15:58:42 crc kubenswrapper[4835]: I0216 15:58:42.138417 4835 generic.go:334] "Generic (PLEG): container finished" podID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerID="193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181" exitCode=0 Feb 16 15:58:42 crc kubenswrapper[4835]: I0216 15:58:42.138518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt98x" event={"ID":"b1890aa8-85f1-4713-8d8f-e755adb6ea59","Type":"ContainerDied","Data":"193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181"} Feb 16 15:58:44 crc kubenswrapper[4835]: I0216 15:58:44.159503 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jt98x" event={"ID":"b1890aa8-85f1-4713-8d8f-e755adb6ea59","Type":"ContainerStarted","Data":"60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280"} Feb 16 15:58:45 crc kubenswrapper[4835]: I0216 15:58:45.169050 4835 generic.go:334] "Generic (PLEG): container finished" podID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerID="60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280" exitCode=0 Feb 16 15:58:45 crc kubenswrapper[4835]: I0216 15:58:45.169096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt98x" event={"ID":"b1890aa8-85f1-4713-8d8f-e755adb6ea59","Type":"ContainerDied","Data":"60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280"} Feb 16 15:58:45 crc kubenswrapper[4835]: E0216 15:58:45.379625 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 15:58:46 crc kubenswrapper[4835]: I0216 15:58:46.196222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt98x" event={"ID":"b1890aa8-85f1-4713-8d8f-e755adb6ea59","Type":"ContainerStarted","Data":"ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e"} Feb 16 15:58:46 crc kubenswrapper[4835]: I0216 15:58:46.219359 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jt98x" podStartSLOduration=2.776107761 podStartE2EDuration="6.219346774s" podCreationTimestamp="2026-02-16 15:58:40 +0000 UTC" firstStartedPulling="2026-02-16 15:58:42.140134074 +0000 UTC m=+3071.432126969" lastFinishedPulling="2026-02-16 15:58:45.583373077 +0000 UTC m=+3074.875365982" 
observedRunningTime="2026-02-16 15:58:46.215488193 +0000 UTC m=+3075.507481088" watchObservedRunningTime="2026-02-16 15:58:46.219346774 +0000 UTC m=+3075.511339669" Feb 16 15:58:50 crc kubenswrapper[4835]: I0216 15:58:50.516045 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:50 crc kubenswrapper[4835]: I0216 15:58:50.516391 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:50 crc kubenswrapper[4835]: I0216 15:58:50.578705 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:50 crc kubenswrapper[4835]: I0216 15:58:50.786934 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5b666486b-bbgfm_ecd1cee6-74c4-493f-9a48-c5e5cecb7cde/kube-rbac-proxy/0.log" Feb 16 15:58:50 crc kubenswrapper[4835]: I0216 15:58:50.861083 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5b666486b-bbgfm_ecd1cee6-74c4-493f-9a48-c5e5cecb7cde/manager/0.log" Feb 16 15:58:51 crc kubenswrapper[4835]: I0216 15:58:51.285100 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:51 crc kubenswrapper[4835]: I0216 15:58:51.329180 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt98x"] Feb 16 15:58:51 crc kubenswrapper[4835]: I0216 15:58:51.383949 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 15:58:51 crc kubenswrapper[4835]: E0216 15:58:51.384206 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.255836 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jt98x" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerName="registry-server" containerID="cri-o://ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e" gracePeriod=2 Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.776762 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.887267 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-catalog-content\") pod \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.887516 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-utilities\") pod \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.887604 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f2vj\" (UniqueName: \"kubernetes.io/projected/b1890aa8-85f1-4713-8d8f-e755adb6ea59-kube-api-access-5f2vj\") pod \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\" (UID: \"b1890aa8-85f1-4713-8d8f-e755adb6ea59\") " Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.889166 4835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-utilities" (OuterVolumeSpecName: "utilities") pod "b1890aa8-85f1-4713-8d8f-e755adb6ea59" (UID: "b1890aa8-85f1-4713-8d8f-e755adb6ea59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.897773 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1890aa8-85f1-4713-8d8f-e755adb6ea59-kube-api-access-5f2vj" (OuterVolumeSpecName: "kube-api-access-5f2vj") pod "b1890aa8-85f1-4713-8d8f-e755adb6ea59" (UID: "b1890aa8-85f1-4713-8d8f-e755adb6ea59"). InnerVolumeSpecName "kube-api-access-5f2vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.913356 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1890aa8-85f1-4713-8d8f-e755adb6ea59" (UID: "b1890aa8-85f1-4713-8d8f-e755adb6ea59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.989651 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.989693 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f2vj\" (UniqueName: \"kubernetes.io/projected/b1890aa8-85f1-4713-8d8f-e755adb6ea59-kube-api-access-5f2vj\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:53 crc kubenswrapper[4835]: I0216 15:58:53.989709 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1890aa8-85f1-4713-8d8f-e755adb6ea59-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.267187 4835 generic.go:334] "Generic (PLEG): container finished" podID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerID="ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e" exitCode=0 Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.267304 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt98x" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.267295 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt98x" event={"ID":"b1890aa8-85f1-4713-8d8f-e755adb6ea59","Type":"ContainerDied","Data":"ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e"} Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.267890 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt98x" event={"ID":"b1890aa8-85f1-4713-8d8f-e755adb6ea59","Type":"ContainerDied","Data":"38d81136189da024a18c9d659a0140c372dc919ea2eda627322a258980e18554"} Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.267942 4835 scope.go:117] "RemoveContainer" containerID="ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.315409 4835 scope.go:117] "RemoveContainer" containerID="60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.321580 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt98x"] Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.326712 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt98x"] Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.338204 4835 scope.go:117] "RemoveContainer" containerID="193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.375967 4835 scope.go:117] "RemoveContainer" containerID="ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e" Feb 16 15:58:54 crc kubenswrapper[4835]: E0216 15:58:54.376364 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e\": container with ID starting with ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e not found: ID does not exist" containerID="ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.376469 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e"} err="failed to get container status \"ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e\": rpc error: code = NotFound desc = could not find container \"ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e\": container with ID starting with ff2cc182203ff51f7110d6a5d78cc075e141ed8992018427b91918880380530e not found: ID does not exist" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.376629 4835 scope.go:117] "RemoveContainer" containerID="60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280" Feb 16 15:58:54 crc kubenswrapper[4835]: E0216 15:58:54.379692 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280\": container with ID starting with 60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280 not found: ID does not exist" containerID="60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.379850 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280"} err="failed to get container status \"60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280\": rpc error: code = NotFound desc = could not find container \"60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280\": container with ID 
starting with 60a0e94037f056e9339dc4bc030bfb75341c4b7af66e0beab75561e3d215e280 not found: ID does not exist" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.379924 4835 scope.go:117] "RemoveContainer" containerID="193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181" Feb 16 15:58:54 crc kubenswrapper[4835]: E0216 15:58:54.380626 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181\": container with ID starting with 193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181 not found: ID does not exist" containerID="193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181" Feb 16 15:58:54 crc kubenswrapper[4835]: I0216 15:58:54.380719 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181"} err="failed to get container status \"193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181\": rpc error: code = NotFound desc = could not find container \"193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181\": container with ID starting with 193eecef0dc2f7be6a543087d43383a9e1004201835a8306dee6c66d6bdc5181 not found: ID does not exist" Feb 16 15:58:55 crc kubenswrapper[4835]: I0216 15:58:55.396023 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" path="/var/lib/kubelet/pods/b1890aa8-85f1-4713-8d8f-e755adb6ea59/volumes" Feb 16 15:58:58 crc kubenswrapper[4835]: E0216 15:58:58.381228 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" 
Feb 16 15:59:04 crc kubenswrapper[4835]: I0216 15:59:04.379681 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23"
Feb 16 15:59:04 crc kubenswrapper[4835]: E0216 15:59:04.380433 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1"
Feb 16 15:59:12 crc kubenswrapper[4835]: E0216 15:59:12.380427 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 15:59:16 crc kubenswrapper[4835]: I0216 15:59:16.379036 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23"
Feb 16 15:59:16 crc kubenswrapper[4835]: E0216 15:59:16.379859 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1"
Feb 16 15:59:27 crc kubenswrapper[4835]: E0216 15:59:27.381817 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 15:59:28 crc kubenswrapper[4835]: I0216 15:59:28.379501 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23"
Feb 16 15:59:28 crc kubenswrapper[4835]: E0216 15:59:28.380085 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.508708 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5hj5h"]
Feb 16 15:59:36 crc kubenswrapper[4835]: E0216 15:59:36.509674 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerName="extract-utilities"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.509688 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerName="extract-utilities"
Feb 16 15:59:36 crc kubenswrapper[4835]: E0216 15:59:36.509701 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerName="registry-server"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.509707 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerName="registry-server"
Feb 16 15:59:36 crc kubenswrapper[4835]: E0216 15:59:36.509740 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerName="extract-content"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.509747 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerName="extract-content"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.509927 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1890aa8-85f1-4713-8d8f-e755adb6ea59" containerName="registry-server"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.511507 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.529280 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hj5h"]
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.619577 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-catalog-content\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.619715 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-utilities\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.619791 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5khl\" (UniqueName: \"kubernetes.io/projected/205228da-fe44-48b2-b1c5-38bd40c81557-kube-api-access-z5khl\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.721967 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-utilities\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.722070 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5khl\" (UniqueName: \"kubernetes.io/projected/205228da-fe44-48b2-b1c5-38bd40c81557-kube-api-access-z5khl\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.722158 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-catalog-content\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.722478 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-utilities\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.722726 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-catalog-content\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.756576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5khl\" (UniqueName: \"kubernetes.io/projected/205228da-fe44-48b2-b1c5-38bd40c81557-kube-api-access-z5khl\") pod \"certified-operators-5hj5h\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") " pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:36 crc kubenswrapper[4835]: I0216 15:59:36.835101 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:37 crc kubenswrapper[4835]: I0216 15:59:37.406801 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hj5h"]
Feb 16 15:59:37 crc kubenswrapper[4835]: W0216 15:59:37.414114 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod205228da_fe44_48b2_b1c5_38bd40c81557.slice/crio-3838185188b78a97a1dea469f1e2651c8beae27f2703c530cb8ade8bc13cf2ba WatchSource:0}: Error finding container 3838185188b78a97a1dea469f1e2651c8beae27f2703c530cb8ade8bc13cf2ba: Status 404 returned error can't find the container with id 3838185188b78a97a1dea469f1e2651c8beae27f2703c530cb8ade8bc13cf2ba
Feb 16 15:59:37 crc kubenswrapper[4835]: I0216 15:59:37.683048 4835 generic.go:334] "Generic (PLEG): container finished" podID="205228da-fe44-48b2-b1c5-38bd40c81557" containerID="e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac" exitCode=0
Feb 16 15:59:37 crc kubenswrapper[4835]: I0216 15:59:37.683103 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hj5h" event={"ID":"205228da-fe44-48b2-b1c5-38bd40c81557","Type":"ContainerDied","Data":"e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac"}
Feb 16 15:59:37 crc kubenswrapper[4835]: I0216 15:59:37.683402 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hj5h" event={"ID":"205228da-fe44-48b2-b1c5-38bd40c81557","Type":"ContainerStarted","Data":"3838185188b78a97a1dea469f1e2651c8beae27f2703c530cb8ade8bc13cf2ba"}
Feb 16 15:59:38 crc kubenswrapper[4835]: I0216 15:59:38.712188 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hj5h" event={"ID":"205228da-fe44-48b2-b1c5-38bd40c81557","Type":"ContainerStarted","Data":"7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3"}
Feb 16 15:59:39 crc kubenswrapper[4835]: E0216 15:59:39.381175 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 15:59:40 crc kubenswrapper[4835]: I0216 15:59:40.378822 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23"
Feb 16 15:59:40 crc kubenswrapper[4835]: E0216 15:59:40.379384 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1"
Feb 16 15:59:40 crc kubenswrapper[4835]: I0216 15:59:40.729366 4835 generic.go:334] "Generic (PLEG): container finished" podID="205228da-fe44-48b2-b1c5-38bd40c81557" containerID="7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3" exitCode=0
Feb 16 15:59:40 crc kubenswrapper[4835]: I0216 15:59:40.729407 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hj5h" event={"ID":"205228da-fe44-48b2-b1c5-38bd40c81557","Type":"ContainerDied","Data":"7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3"}
Feb 16 15:59:41 crc kubenswrapper[4835]: I0216 15:59:41.742186 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hj5h" event={"ID":"205228da-fe44-48b2-b1c5-38bd40c81557","Type":"ContainerStarted","Data":"d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a"}
Feb 16 15:59:41 crc kubenswrapper[4835]: I0216 15:59:41.777016 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5hj5h" podStartSLOduration=2.314734696 podStartE2EDuration="5.776982984s" podCreationTimestamp="2026-02-16 15:59:36 +0000 UTC" firstStartedPulling="2026-02-16 15:59:37.68762554 +0000 UTC m=+3126.979618445" lastFinishedPulling="2026-02-16 15:59:41.149873838 +0000 UTC m=+3130.441866733" observedRunningTime="2026-02-16 15:59:41.7760644 +0000 UTC m=+3131.068057315" watchObservedRunningTime="2026-02-16 15:59:41.776982984 +0000 UTC m=+3131.068975879"
Feb 16 15:59:46 crc kubenswrapper[4835]: I0216 15:59:46.837450 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:46 crc kubenswrapper[4835]: I0216 15:59:46.838068 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:46 crc kubenswrapper[4835]: I0216 15:59:46.903504 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:47 crc kubenswrapper[4835]: I0216 15:59:47.868268 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:47 crc kubenswrapper[4835]: I0216 15:59:47.920705 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hj5h"]
Feb 16 15:59:49 crc kubenswrapper[4835]: I0216 15:59:49.824991 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5hj5h" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" containerName="registry-server" containerID="cri-o://d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a" gracePeriod=2
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.338398 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.425452 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-utilities\") pod \"205228da-fe44-48b2-b1c5-38bd40c81557\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") "
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.425792 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5khl\" (UniqueName: \"kubernetes.io/projected/205228da-fe44-48b2-b1c5-38bd40c81557-kube-api-access-z5khl\") pod \"205228da-fe44-48b2-b1c5-38bd40c81557\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") "
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.425834 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-catalog-content\") pod \"205228da-fe44-48b2-b1c5-38bd40c81557\" (UID: \"205228da-fe44-48b2-b1c5-38bd40c81557\") "
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.428270 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-utilities" (OuterVolumeSpecName: "utilities") pod "205228da-fe44-48b2-b1c5-38bd40c81557" (UID: "205228da-fe44-48b2-b1c5-38bd40c81557"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.433941 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205228da-fe44-48b2-b1c5-38bd40c81557-kube-api-access-z5khl" (OuterVolumeSpecName: "kube-api-access-z5khl") pod "205228da-fe44-48b2-b1c5-38bd40c81557" (UID: "205228da-fe44-48b2-b1c5-38bd40c81557"). InnerVolumeSpecName "kube-api-access-z5khl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.493246 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "205228da-fe44-48b2-b1c5-38bd40c81557" (UID: "205228da-fe44-48b2-b1c5-38bd40c81557"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.529389 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5khl\" (UniqueName: \"kubernetes.io/projected/205228da-fe44-48b2-b1c5-38bd40c81557-kube-api-access-z5khl\") on node \"crc\" DevicePath \"\""
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.529419 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.529429 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/205228da-fe44-48b2-b1c5-38bd40c81557-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.837204 4835 generic.go:334] "Generic (PLEG): container finished" podID="205228da-fe44-48b2-b1c5-38bd40c81557" containerID="d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a" exitCode=0
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.837253 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hj5h" event={"ID":"205228da-fe44-48b2-b1c5-38bd40c81557","Type":"ContainerDied","Data":"d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a"}
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.837284 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hj5h" event={"ID":"205228da-fe44-48b2-b1c5-38bd40c81557","Type":"ContainerDied","Data":"3838185188b78a97a1dea469f1e2651c8beae27f2703c530cb8ade8bc13cf2ba"}
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.837305 4835 scope.go:117] "RemoveContainer" containerID="d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.837311 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hj5h"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.883978 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hj5h"]
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.893337 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5hj5h"]
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.895986 4835 scope.go:117] "RemoveContainer" containerID="7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.914942 4835 scope.go:117] "RemoveContainer" containerID="e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.978503 4835 scope.go:117] "RemoveContainer" containerID="d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a"
Feb 16 15:59:50 crc kubenswrapper[4835]: E0216 15:59:50.978895 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a\": container with ID starting with d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a not found: ID does not exist" containerID="d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.978922 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a"} err="failed to get container status \"d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a\": rpc error: code = NotFound desc = could not find container \"d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a\": container with ID starting with d99719cdf9bfcf54530753048e33d0431e334d1445fb7e69c34be1511b8fe28a not found: ID does not exist"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.978941 4835 scope.go:117] "RemoveContainer" containerID="7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3"
Feb 16 15:59:50 crc kubenswrapper[4835]: E0216 15:59:50.979102 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3\": container with ID starting with 7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3 not found: ID does not exist" containerID="7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.979121 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3"} err="failed to get container status \"7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3\": rpc error: code = NotFound desc = could not find container \"7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3\": container with ID starting with 7e53b343a89499dbddb33227a2a81f0a250a92c41ce2cdd132b92bdf0a0b28f3 not found: ID does not exist"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.979134 4835 scope.go:117] "RemoveContainer" containerID="e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac"
Feb 16 15:59:50 crc kubenswrapper[4835]: E0216 15:59:50.979281 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac\": container with ID starting with e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac not found: ID does not exist" containerID="e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac"
Feb 16 15:59:50 crc kubenswrapper[4835]: I0216 15:59:50.979297 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac"} err="failed to get container status \"e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac\": rpc error: code = NotFound desc = could not find container \"e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac\": container with ID starting with e964023f191d3cd2d6c30226d797a8cefe2f9190875071562187d4f3cd0d5aac not found: ID does not exist"
Feb 16 15:59:51 crc kubenswrapper[4835]: I0216 15:59:51.389109 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23"
Feb 16 15:59:51 crc kubenswrapper[4835]: E0216 15:59:51.389678 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1"
Feb 16 15:59:51 crc kubenswrapper[4835]: E0216 15:59:51.390809 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 15:59:51 crc kubenswrapper[4835]: I0216 15:59:51.391749 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" path="/var/lib/kubelet/pods/205228da-fe44-48b2-b1c5-38bd40c81557/volumes"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.818951 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f58fb"]
Feb 16 15:59:58 crc kubenswrapper[4835]: E0216 15:59:58.819908 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" containerName="registry-server"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.819922 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" containerName="registry-server"
Feb 16 15:59:58 crc kubenswrapper[4835]: E0216 15:59:58.819932 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" containerName="extract-utilities"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.819939 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" containerName="extract-utilities"
Feb 16 15:59:58 crc kubenswrapper[4835]: E0216 15:59:58.819956 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" containerName="extract-content"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.819963 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" containerName="extract-content"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.820183 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="205228da-fe44-48b2-b1c5-38bd40c81557" containerName="registry-server"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.821923 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.840932 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f58fb"]
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.896747 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gx28\" (UniqueName: \"kubernetes.io/projected/da2f6437-3a16-4bb1-8d74-a267cd8a310b-kube-api-access-8gx28\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.896821 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-catalog-content\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.896984 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-utilities\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.998311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gx28\" (UniqueName: \"kubernetes.io/projected/da2f6437-3a16-4bb1-8d74-a267cd8a310b-kube-api-access-8gx28\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.998362 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-catalog-content\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.998489 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-utilities\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.998908 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-utilities\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:58 crc kubenswrapper[4835]: I0216 15:59:58.998957 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-catalog-content\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:59 crc kubenswrapper[4835]: I0216 15:59:59.017760 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gx28\" (UniqueName: \"kubernetes.io/projected/da2f6437-3a16-4bb1-8d74-a267cd8a310b-kube-api-access-8gx28\") pod \"community-operators-f58fb\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:59 crc kubenswrapper[4835]: I0216 15:59:59.146772 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f58fb"
Feb 16 15:59:59 crc kubenswrapper[4835]: I0216 15:59:59.679431 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f58fb"]
Feb 16 15:59:59 crc kubenswrapper[4835]: I0216 15:59:59.924280 4835 generic.go:334] "Generic (PLEG): container finished" podID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerID="339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02" exitCode=0
Feb 16 15:59:59 crc kubenswrapper[4835]: I0216 15:59:59.924319 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f58fb" event={"ID":"da2f6437-3a16-4bb1-8d74-a267cd8a310b","Type":"ContainerDied","Data":"339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02"}
Feb 16 15:59:59 crc kubenswrapper[4835]: I0216 15:59:59.924625 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f58fb" event={"ID":"da2f6437-3a16-4bb1-8d74-a267cd8a310b","Type":"ContainerStarted","Data":"ad0c9dbd524b8bd4e66c5b6778b6228301853d6558905413c53ad9ecf4902a74"}
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.157761 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"]
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.161039 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.163880 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.164317 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.168428 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"]
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.326636 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g664\" (UniqueName: \"kubernetes.io/projected/7183d2c6-a766-458e-86b8-c395851e51f1-kube-api-access-2g664\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.326902 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7183d2c6-a766-458e-86b8-c395851e51f1-secret-volume\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.327210 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7183d2c6-a766-458e-86b8-c395851e51f1-config-volume\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.429422 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g664\" (UniqueName: \"kubernetes.io/projected/7183d2c6-a766-458e-86b8-c395851e51f1-kube-api-access-2g664\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.429628 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7183d2c6-a766-458e-86b8-c395851e51f1-secret-volume\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.429744 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7183d2c6-a766-458e-86b8-c395851e51f1-config-volume\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.430654 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7183d2c6-a766-458e-86b8-c395851e51f1-config-volume\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.443045 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7183d2c6-a766-458e-86b8-c395851e51f1-secret-volume\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.446740 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g664\" (UniqueName: \"kubernetes.io/projected/7183d2c6-a766-458e-86b8-c395851e51f1-kube-api-access-2g664\") pod \"collect-profiles-29520960-qp227\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:00 crc kubenswrapper[4835]: I0216 16:00:00.482336 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:01 crc kubenswrapper[4835]: I0216 16:00:01.011300 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"]
Feb 16 16:00:01 crc kubenswrapper[4835]: I0216 16:00:01.981262 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f58fb" event={"ID":"da2f6437-3a16-4bb1-8d74-a267cd8a310b","Type":"ContainerStarted","Data":"21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434"}
Feb 16 16:00:01 crc kubenswrapper[4835]: I0216 16:00:01.983316 4835 generic.go:334] "Generic (PLEG): container finished" podID="7183d2c6-a766-458e-86b8-c395851e51f1" containerID="64f90ec4a86c3779411cbc15b4fec996c2d82e04c7dfd34cad0f25709fce22d4" exitCode=0
Feb 16 16:00:01 crc kubenswrapper[4835]: I0216 16:00:01.983355 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227" event={"ID":"7183d2c6-a766-458e-86b8-c395851e51f1","Type":"ContainerDied","Data":"64f90ec4a86c3779411cbc15b4fec996c2d82e04c7dfd34cad0f25709fce22d4"}
Feb 16 16:00:01 crc kubenswrapper[4835]: I0216 16:00:01.983381 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227" event={"ID":"7183d2c6-a766-458e-86b8-c395851e51f1","Type":"ContainerStarted","Data":"f0fe38b6560504acd7e63c31c2bd075cf2307880d6380789bcfa8800288cb6d7"}
Feb 16 16:00:02 crc kubenswrapper[4835]: I0216 16:00:02.993410 4835 generic.go:334] "Generic (PLEG): container finished" podID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerID="21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434" exitCode=0
Feb 16 16:00:02 crc kubenswrapper[4835]: I0216 16:00:02.993474 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f58fb" event={"ID":"da2f6437-3a16-4bb1-8d74-a267cd8a310b","Type":"ContainerDied","Data":"21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434"}
Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.389945 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227"
Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.500239 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7183d2c6-a766-458e-86b8-c395851e51f1-secret-volume\") pod \"7183d2c6-a766-458e-86b8-c395851e51f1\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") "
Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.500475 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7183d2c6-a766-458e-86b8-c395851e51f1-config-volume\") pod \"7183d2c6-a766-458e-86b8-c395851e51f1\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") "
Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.500684 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g664\" (UniqueName: \"kubernetes.io/projected/7183d2c6-a766-458e-86b8-c395851e51f1-kube-api-access-2g664\") pod \"7183d2c6-a766-458e-86b8-c395851e51f1\" (UID: \"7183d2c6-a766-458e-86b8-c395851e51f1\") "
Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.502783 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7183d2c6-a766-458e-86b8-c395851e51f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "7183d2c6-a766-458e-86b8-c395851e51f1" (UID: "7183d2c6-a766-458e-86b8-c395851e51f1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.508663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7183d2c6-a766-458e-86b8-c395851e51f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7183d2c6-a766-458e-86b8-c395851e51f1" (UID: "7183d2c6-a766-458e-86b8-c395851e51f1"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.518695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7183d2c6-a766-458e-86b8-c395851e51f1-kube-api-access-2g664" (OuterVolumeSpecName: "kube-api-access-2g664") pod "7183d2c6-a766-458e-86b8-c395851e51f1" (UID: "7183d2c6-a766-458e-86b8-c395851e51f1"). InnerVolumeSpecName "kube-api-access-2g664". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.603143 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g664\" (UniqueName: \"kubernetes.io/projected/7183d2c6-a766-458e-86b8-c395851e51f1-kube-api-access-2g664\") on node \"crc\" DevicePath \"\"" Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.603173 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7183d2c6-a766-458e-86b8-c395851e51f1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:00:03 crc kubenswrapper[4835]: I0216 16:00:03.603184 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7183d2c6-a766-458e-86b8-c395851e51f1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 16:00:04 crc kubenswrapper[4835]: I0216 16:00:04.003084 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f58fb" event={"ID":"da2f6437-3a16-4bb1-8d74-a267cd8a310b","Type":"ContainerStarted","Data":"2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b"} Feb 16 16:00:04 crc kubenswrapper[4835]: I0216 16:00:04.006862 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227" event={"ID":"7183d2c6-a766-458e-86b8-c395851e51f1","Type":"ContainerDied","Data":"f0fe38b6560504acd7e63c31c2bd075cf2307880d6380789bcfa8800288cb6d7"} Feb 16 
16:00:04 crc kubenswrapper[4835]: I0216 16:00:04.006900 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0fe38b6560504acd7e63c31c2bd075cf2307880d6380789bcfa8800288cb6d7" Feb 16 16:00:04 crc kubenswrapper[4835]: I0216 16:00:04.006967 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520960-qp227" Feb 16 16:00:04 crc kubenswrapper[4835]: I0216 16:00:04.041105 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f58fb" podStartSLOduration=2.45414348 podStartE2EDuration="6.041083831s" podCreationTimestamp="2026-02-16 15:59:58 +0000 UTC" firstStartedPulling="2026-02-16 15:59:59.926433387 +0000 UTC m=+3149.218426282" lastFinishedPulling="2026-02-16 16:00:03.513373738 +0000 UTC m=+3152.805366633" observedRunningTime="2026-02-16 16:00:04.035592888 +0000 UTC m=+3153.327585803" watchObservedRunningTime="2026-02-16 16:00:04.041083831 +0000 UTC m=+3153.333076726" Feb 16 16:00:04 crc kubenswrapper[4835]: I0216 16:00:04.487914 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22"] Feb 16 16:00:04 crc kubenswrapper[4835]: I0216 16:00:04.497012 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-gtd22"] Feb 16 16:00:05 crc kubenswrapper[4835]: I0216 16:00:05.390697 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79abfe47-062a-40f9-a2de-61850e9711d7" path="/var/lib/kubelet/pods/79abfe47-062a-40f9-a2de-61850e9711d7/volumes" Feb 16 16:00:06 crc kubenswrapper[4835]: I0216 16:00:06.380835 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 16:00:06 crc kubenswrapper[4835]: E0216 16:00:06.381246 4835 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:00:06 crc kubenswrapper[4835]: E0216 16:00:06.381698 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 16:00:09 crc kubenswrapper[4835]: I0216 16:00:09.147563 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f58fb" Feb 16 16:00:09 crc kubenswrapper[4835]: I0216 16:00:09.147983 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f58fb" Feb 16 16:00:09 crc kubenswrapper[4835]: I0216 16:00:09.189469 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f58fb" Feb 16 16:00:10 crc kubenswrapper[4835]: I0216 16:00:10.129584 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f58fb" Feb 16 16:00:10 crc kubenswrapper[4835]: I0216 16:00:10.183521 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f58fb"] Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.098626 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f58fb" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerName="registry-server" 
containerID="cri-o://2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b" gracePeriod=2 Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.564493 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f58fb" Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.694447 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gx28\" (UniqueName: \"kubernetes.io/projected/da2f6437-3a16-4bb1-8d74-a267cd8a310b-kube-api-access-8gx28\") pod \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.694594 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-catalog-content\") pod \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.694829 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-utilities\") pod \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\" (UID: \"da2f6437-3a16-4bb1-8d74-a267cd8a310b\") " Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.695933 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-utilities" (OuterVolumeSpecName: "utilities") pod "da2f6437-3a16-4bb1-8d74-a267cd8a310b" (UID: "da2f6437-3a16-4bb1-8d74-a267cd8a310b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.701418 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2f6437-3a16-4bb1-8d74-a267cd8a310b-kube-api-access-8gx28" (OuterVolumeSpecName: "kube-api-access-8gx28") pod "da2f6437-3a16-4bb1-8d74-a267cd8a310b" (UID: "da2f6437-3a16-4bb1-8d74-a267cd8a310b"). InnerVolumeSpecName "kube-api-access-8gx28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.755696 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da2f6437-3a16-4bb1-8d74-a267cd8a310b" (UID: "da2f6437-3a16-4bb1-8d74-a267cd8a310b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.797765 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.797822 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gx28\" (UniqueName: \"kubernetes.io/projected/da2f6437-3a16-4bb1-8d74-a267cd8a310b-kube-api-access-8gx28\") on node \"crc\" DevicePath \"\"" Feb 16 16:00:12 crc kubenswrapper[4835]: I0216 16:00:12.797840 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da2f6437-3a16-4bb1-8d74-a267cd8a310b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.112187 4835 generic.go:334] "Generic (PLEG): container finished" podID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" 
containerID="2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b" exitCode=0 Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.112230 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f58fb" event={"ID":"da2f6437-3a16-4bb1-8d74-a267cd8a310b","Type":"ContainerDied","Data":"2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b"} Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.112242 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f58fb" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.112262 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f58fb" event={"ID":"da2f6437-3a16-4bb1-8d74-a267cd8a310b","Type":"ContainerDied","Data":"ad0c9dbd524b8bd4e66c5b6778b6228301853d6558905413c53ad9ecf4902a74"} Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.112281 4835 scope.go:117] "RemoveContainer" containerID="2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.135197 4835 scope.go:117] "RemoveContainer" containerID="21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.150946 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f58fb"] Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.158848 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f58fb"] Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.163105 4835 scope.go:117] "RemoveContainer" containerID="339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.226945 4835 scope.go:117] "RemoveContainer" containerID="2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b" Feb 16 
16:00:13 crc kubenswrapper[4835]: E0216 16:00:13.227915 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b\": container with ID starting with 2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b not found: ID does not exist" containerID="2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.227945 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b"} err="failed to get container status \"2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b\": rpc error: code = NotFound desc = could not find container \"2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b\": container with ID starting with 2243800ec7d90061cc57381cd963df0a1b6bcb900af19d8ee19432aa9d70fd4b not found: ID does not exist" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.227979 4835 scope.go:117] "RemoveContainer" containerID="21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434" Feb 16 16:00:13 crc kubenswrapper[4835]: E0216 16:00:13.228331 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434\": container with ID starting with 21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434 not found: ID does not exist" containerID="21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.228630 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434"} err="failed to get container status 
\"21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434\": rpc error: code = NotFound desc = could not find container \"21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434\": container with ID starting with 21ea62807517bbda4d1201ddd178d761e5fe8be46a87dd798b9b653fae027434 not found: ID does not exist" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.228652 4835 scope.go:117] "RemoveContainer" containerID="339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02" Feb 16 16:00:13 crc kubenswrapper[4835]: E0216 16:00:13.228965 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02\": container with ID starting with 339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02 not found: ID does not exist" containerID="339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.228989 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02"} err="failed to get container status \"339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02\": rpc error: code = NotFound desc = could not find container \"339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02\": container with ID starting with 339e7b2dd39762abc406f99e6190e72718ac513936218a7495bf2858afad1d02 not found: ID does not exist" Feb 16 16:00:13 crc kubenswrapper[4835]: I0216 16:00:13.398199 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" path="/var/lib/kubelet/pods/da2f6437-3a16-4bb1-8d74-a267cd8a310b/volumes" Feb 16 16:00:18 crc kubenswrapper[4835]: I0216 16:00:18.379808 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 
16:00:18 crc kubenswrapper[4835]: E0216 16:00:18.382037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 16:00:19 crc kubenswrapper[4835]: E0216 16:00:19.381412 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:00:25 crc kubenswrapper[4835]: I0216 16:00:25.224164 4835 generic.go:334] "Generic (PLEG): container finished" podID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerID="bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed" exitCode=0 Feb 16 16:00:25 crc kubenswrapper[4835]: I0216 16:00:25.224547 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5m7r9/must-gather-xsr78" event={"ID":"173d5357-97c2-4bd8-822d-7fd2645c30fb","Type":"ContainerDied","Data":"bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed"} Feb 16 16:00:25 crc kubenswrapper[4835]: I0216 16:00:25.225242 4835 scope.go:117] "RemoveContainer" containerID="bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed" Feb 16 16:00:26 crc kubenswrapper[4835]: I0216 16:00:26.121715 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5m7r9_must-gather-xsr78_173d5357-97c2-4bd8-822d-7fd2645c30fb/gather/0.log" Feb 16 16:00:31 crc kubenswrapper[4835]: E0216 16:00:31.385452 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:00:32 crc kubenswrapper[4835]: I0216 16:00:32.379374 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 16:00:32 crc kubenswrapper[4835]: E0216 16:00:32.379979 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 16:00:33 crc kubenswrapper[4835]: I0216 16:00:33.799607 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5m7r9/must-gather-xsr78"] Feb 16 16:00:33 crc kubenswrapper[4835]: I0216 16:00:33.800834 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5m7r9/must-gather-xsr78" podUID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerName="copy" containerID="cri-o://1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc" gracePeriod=2 Feb 16 16:00:33 crc kubenswrapper[4835]: I0216 16:00:33.807205 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5m7r9/must-gather-xsr78"] Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.281230 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5m7r9_must-gather-xsr78_173d5357-97c2-4bd8-822d-7fd2645c30fb/copy/0.log" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.282173 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.321968 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5m7r9_must-gather-xsr78_173d5357-97c2-4bd8-822d-7fd2645c30fb/copy/0.log" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.322483 4835 generic.go:334] "Generic (PLEG): container finished" podID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerID="1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc" exitCode=143 Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.322602 4835 scope.go:117] "RemoveContainer" containerID="1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.322759 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5m7r9/must-gather-xsr78" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.323321 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtrdg\" (UniqueName: \"kubernetes.io/projected/173d5357-97c2-4bd8-822d-7fd2645c30fb-kube-api-access-jtrdg\") pod \"173d5357-97c2-4bd8-822d-7fd2645c30fb\" (UID: \"173d5357-97c2-4bd8-822d-7fd2645c30fb\") " Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.323613 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/173d5357-97c2-4bd8-822d-7fd2645c30fb-must-gather-output\") pod \"173d5357-97c2-4bd8-822d-7fd2645c30fb\" (UID: \"173d5357-97c2-4bd8-822d-7fd2645c30fb\") " Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.329821 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173d5357-97c2-4bd8-822d-7fd2645c30fb-kube-api-access-jtrdg" (OuterVolumeSpecName: "kube-api-access-jtrdg") pod "173d5357-97c2-4bd8-822d-7fd2645c30fb" (UID: 
"173d5357-97c2-4bd8-822d-7fd2645c30fb"). InnerVolumeSpecName "kube-api-access-jtrdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.393208 4835 scope.go:117] "RemoveContainer" containerID="bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.432642 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtrdg\" (UniqueName: \"kubernetes.io/projected/173d5357-97c2-4bd8-822d-7fd2645c30fb-kube-api-access-jtrdg\") on node \"crc\" DevicePath \"\"" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.448833 4835 scope.go:117] "RemoveContainer" containerID="1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc" Feb 16 16:00:34 crc kubenswrapper[4835]: E0216 16:00:34.449344 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc\": container with ID starting with 1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc not found: ID does not exist" containerID="1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.449374 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc"} err="failed to get container status \"1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc\": rpc error: code = NotFound desc = could not find container \"1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc\": container with ID starting with 1bc8a64c5a5d6fa9a5f37ec839330739c92d1d2d13d6010253576ae208803cfc not found: ID does not exist" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.449395 4835 scope.go:117] "RemoveContainer" 
containerID="bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed" Feb 16 16:00:34 crc kubenswrapper[4835]: E0216 16:00:34.454804 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed\": container with ID starting with bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed not found: ID does not exist" containerID="bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.454853 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed"} err="failed to get container status \"bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed\": rpc error: code = NotFound desc = could not find container \"bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed\": container with ID starting with bdb9a21ded27832b593fac38d8541fbfdb7012cba743560d5eb748ae0d17a0ed not found: ID does not exist" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.514401 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/173d5357-97c2-4bd8-822d-7fd2645c30fb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "173d5357-97c2-4bd8-822d-7fd2645c30fb" (UID: "173d5357-97c2-4bd8-822d-7fd2645c30fb"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 16:00:34 crc kubenswrapper[4835]: I0216 16:00:34.538431 4835 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/173d5357-97c2-4bd8-822d-7fd2645c30fb-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 16:00:35 crc kubenswrapper[4835]: I0216 16:00:35.388597 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="173d5357-97c2-4bd8-822d-7fd2645c30fb" path="/var/lib/kubelet/pods/173d5357-97c2-4bd8-822d-7fd2645c30fb/volumes" Feb 16 16:00:36 crc kubenswrapper[4835]: I0216 16:00:36.935634 4835 scope.go:117] "RemoveContainer" containerID="16cd500f1d79347b86d347f75314b510623fc28deccfe416ca371aaca8f056e2" Feb 16 16:00:44 crc kubenswrapper[4835]: I0216 16:00:44.379426 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 16:00:44 crc kubenswrapper[4835]: E0216 16:00:44.380479 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 16:00:45 crc kubenswrapper[4835]: E0216 16:00:45.380401 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:00:57 crc kubenswrapper[4835]: I0216 16:00:57.379733 4835 scope.go:117] "RemoveContainer" 
containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 16:00:57 crc kubenswrapper[4835]: E0216 16:00:57.380694 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 16:00:59 crc kubenswrapper[4835]: E0216 16:00:59.381200 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.148387 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520961-c98d7"] Feb 16 16:01:00 crc kubenswrapper[4835]: E0216 16:01:00.149142 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerName="extract-content" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149164 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerName="extract-content" Feb 16 16:01:00 crc kubenswrapper[4835]: E0216 16:01:00.149178 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerName="registry-server" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149215 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerName="registry-server" Feb 16 16:01:00 crc kubenswrapper[4835]: E0216 16:01:00.149250 4835 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerName="extract-utilities" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149258 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerName="extract-utilities" Feb 16 16:01:00 crc kubenswrapper[4835]: E0216 16:01:00.149275 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerName="gather" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149283 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerName="gather" Feb 16 16:01:00 crc kubenswrapper[4835]: E0216 16:01:00.149293 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerName="copy" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149300 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerName="copy" Feb 16 16:01:00 crc kubenswrapper[4835]: E0216 16:01:00.149310 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7183d2c6-a766-458e-86b8-c395851e51f1" containerName="collect-profiles" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149318 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7183d2c6-a766-458e-86b8-c395851e51f1" containerName="collect-profiles" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149552 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerName="gather" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149566 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2f6437-3a16-4bb1-8d74-a267cd8a310b" containerName="registry-server" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149577 4835 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="173d5357-97c2-4bd8-822d-7fd2645c30fb" containerName="copy" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.149609 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7183d2c6-a766-458e-86b8-c395851e51f1" containerName="collect-profiles" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.150596 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.162682 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520961-c98d7"] Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.191111 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-fernet-keys\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.191180 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p86x\" (UniqueName: \"kubernetes.io/projected/5a354380-89ce-4b44-936f-c92ea10cfd8c-kube-api-access-8p86x\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.191230 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-config-data\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.191268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-combined-ca-bundle\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.293707 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-fernet-keys\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.293774 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p86x\" (UniqueName: \"kubernetes.io/projected/5a354380-89ce-4b44-936f-c92ea10cfd8c-kube-api-access-8p86x\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.293815 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-config-data\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.293845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-combined-ca-bundle\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.312833 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-combined-ca-bundle\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.312945 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-fernet-keys\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.312952 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-config-data\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.314733 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p86x\" (UniqueName: \"kubernetes.io/projected/5a354380-89ce-4b44-936f-c92ea10cfd8c-kube-api-access-8p86x\") pod \"keystone-cron-29520961-c98d7\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.501428 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:00 crc kubenswrapper[4835]: I0216 16:01:00.937941 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520961-c98d7"] Feb 16 16:01:01 crc kubenswrapper[4835]: I0216 16:01:01.558936 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-c98d7" event={"ID":"5a354380-89ce-4b44-936f-c92ea10cfd8c","Type":"ContainerStarted","Data":"764fb29d165fae96686c53a788f4e3eda74bf63741125414cc11b64a8e7a1edb"} Feb 16 16:01:01 crc kubenswrapper[4835]: I0216 16:01:01.559183 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-c98d7" event={"ID":"5a354380-89ce-4b44-936f-c92ea10cfd8c","Type":"ContainerStarted","Data":"c0180620b745009309dda9f53b05e22dc3ffe53b0ce2329968572fa7d8161441"} Feb 16 16:01:01 crc kubenswrapper[4835]: I0216 16:01:01.575549 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520961-c98d7" podStartSLOduration=1.575516757 podStartE2EDuration="1.575516757s" podCreationTimestamp="2026-02-16 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:01:01.571146853 +0000 UTC m=+3210.863139738" watchObservedRunningTime="2026-02-16 16:01:01.575516757 +0000 UTC m=+3210.867509652" Feb 16 16:01:04 crc kubenswrapper[4835]: I0216 16:01:04.583197 4835 generic.go:334] "Generic (PLEG): container finished" podID="5a354380-89ce-4b44-936f-c92ea10cfd8c" containerID="764fb29d165fae96686c53a788f4e3eda74bf63741125414cc11b64a8e7a1edb" exitCode=0 Feb 16 16:01:04 crc kubenswrapper[4835]: I0216 16:01:04.583395 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-c98d7" 
event={"ID":"5a354380-89ce-4b44-936f-c92ea10cfd8c","Type":"ContainerDied","Data":"764fb29d165fae96686c53a788f4e3eda74bf63741125414cc11b64a8e7a1edb"} Feb 16 16:01:05 crc kubenswrapper[4835]: I0216 16:01:05.986589 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.176138 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p86x\" (UniqueName: \"kubernetes.io/projected/5a354380-89ce-4b44-936f-c92ea10cfd8c-kube-api-access-8p86x\") pod \"5a354380-89ce-4b44-936f-c92ea10cfd8c\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.176198 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-config-data\") pod \"5a354380-89ce-4b44-936f-c92ea10cfd8c\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.176228 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-fernet-keys\") pod \"5a354380-89ce-4b44-936f-c92ea10cfd8c\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.176279 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-combined-ca-bundle\") pod \"5a354380-89ce-4b44-936f-c92ea10cfd8c\" (UID: \"5a354380-89ce-4b44-936f-c92ea10cfd8c\") " Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.183794 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a354380-89ce-4b44-936f-c92ea10cfd8c-kube-api-access-8p86x" 
(OuterVolumeSpecName: "kube-api-access-8p86x") pod "5a354380-89ce-4b44-936f-c92ea10cfd8c" (UID: "5a354380-89ce-4b44-936f-c92ea10cfd8c"). InnerVolumeSpecName "kube-api-access-8p86x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.187741 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5a354380-89ce-4b44-936f-c92ea10cfd8c" (UID: "5a354380-89ce-4b44-936f-c92ea10cfd8c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.211653 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a354380-89ce-4b44-936f-c92ea10cfd8c" (UID: "5a354380-89ce-4b44-936f-c92ea10cfd8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.240222 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-config-data" (OuterVolumeSpecName: "config-data") pod "5a354380-89ce-4b44-936f-c92ea10cfd8c" (UID: "5a354380-89ce-4b44-936f-c92ea10cfd8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.277764 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p86x\" (UniqueName: \"kubernetes.io/projected/5a354380-89ce-4b44-936f-c92ea10cfd8c-kube-api-access-8p86x\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.278040 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.278105 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.278165 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a354380-89ce-4b44-936f-c92ea10cfd8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.603816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520961-c98d7" event={"ID":"5a354380-89ce-4b44-936f-c92ea10cfd8c","Type":"ContainerDied","Data":"c0180620b745009309dda9f53b05e22dc3ffe53b0ce2329968572fa7d8161441"} Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.604098 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0180620b745009309dda9f53b05e22dc3ffe53b0ce2329968572fa7d8161441" Feb 16 16:01:06 crc kubenswrapper[4835]: I0216 16:01:06.603881 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520961-c98d7" Feb 16 16:01:11 crc kubenswrapper[4835]: I0216 16:01:11.383779 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 16:01:11 crc kubenswrapper[4835]: E0216 16:01:11.384465 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nd4kl_openshift-machine-config-operator(d233f2c8-6963-48c1-889e-ef20f52ad5b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" Feb 16 16:01:13 crc kubenswrapper[4835]: E0216 16:01:13.382861 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:01:25 crc kubenswrapper[4835]: I0216 16:01:25.378542 4835 scope.go:117] "RemoveContainer" containerID="7cdf7bd477db962d997bc0b6d4ab7193abcfa4e081a6addace3d8a6f79435b23" Feb 16 16:01:25 crc kubenswrapper[4835]: I0216 16:01:25.796036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" event={"ID":"d233f2c8-6963-48c1-889e-ef20f52ad5b1","Type":"ContainerStarted","Data":"88bc878e0c384828e2ae0f96fd69444db9e8dd7f288db511325750f9cf8e1a83"} Feb 16 16:01:27 crc kubenswrapper[4835]: E0216 16:01:27.382295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:01:37 crc kubenswrapper[4835]: I0216 16:01:37.061855 4835 scope.go:117] "RemoveContainer" containerID="760cce77a4bb11b6e3379ae8f039fd8d97117a1f115f97d30b6834bdab05d6e7" Feb 16 16:01:41 crc kubenswrapper[4835]: E0216 16:01:41.388950 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:01:56 crc kubenswrapper[4835]: I0216 16:01:56.382644 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:01:56 crc kubenswrapper[4835]: E0216 16:01:56.512867 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 16:01:56 crc kubenswrapper[4835]: E0216 16:01:56.512921 4835 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 16:01:56 crc kubenswrapper[4835]: E0216 16:01:56.513063 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqdtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-sgzmb_openstack(3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 16:01:56 crc kubenswrapper[4835]: E0216 16:01:56.514237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.255048 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-br7jz"] Feb 16 16:02:07 crc kubenswrapper[4835]: E0216 16:02:07.256481 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a354380-89ce-4b44-936f-c92ea10cfd8c" containerName="keystone-cron" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.256508 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a354380-89ce-4b44-936f-c92ea10cfd8c" containerName="keystone-cron" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.256889 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a354380-89ce-4b44-936f-c92ea10cfd8c" containerName="keystone-cron" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.259787 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.262666 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-br7jz"] Feb 16 16:02:07 crc kubenswrapper[4835]: E0216 16:02:07.380986 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.406355 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxc6x\" (UniqueName: \"kubernetes.io/projected/2e29cfe7-9bea-4901-97a0-dd86d9a71835-kube-api-access-vxc6x\") pod \"redhat-operators-br7jz\" (UID: 
\"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.406452 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-utilities\") pod \"redhat-operators-br7jz\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.406636 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-catalog-content\") pod \"redhat-operators-br7jz\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.508442 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-catalog-content\") pod \"redhat-operators-br7jz\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.508618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxc6x\" (UniqueName: \"kubernetes.io/projected/2e29cfe7-9bea-4901-97a0-dd86d9a71835-kube-api-access-vxc6x\") pod \"redhat-operators-br7jz\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.508685 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-utilities\") pod \"redhat-operators-br7jz\" (UID: 
\"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.509159 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-utilities\") pod \"redhat-operators-br7jz\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.510779 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-catalog-content\") pod \"redhat-operators-br7jz\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.535296 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxc6x\" (UniqueName: \"kubernetes.io/projected/2e29cfe7-9bea-4901-97a0-dd86d9a71835-kube-api-access-vxc6x\") pod \"redhat-operators-br7jz\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") " pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:07 crc kubenswrapper[4835]: I0216 16:02:07.592458 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-br7jz" Feb 16 16:02:08 crc kubenswrapper[4835]: I0216 16:02:08.063270 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-br7jz"] Feb 16 16:02:08 crc kubenswrapper[4835]: I0216 16:02:08.179394 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br7jz" event={"ID":"2e29cfe7-9bea-4901-97a0-dd86d9a71835","Type":"ContainerStarted","Data":"3d33ca9cc27fbd43cc896adda71e13eb5158e473f539b940d977da288299dc4d"} Feb 16 16:02:09 crc kubenswrapper[4835]: I0216 16:02:09.189279 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e29cfe7-9bea-4901-97a0-dd86d9a71835" containerID="4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e" exitCode=0 Feb 16 16:02:09 crc kubenswrapper[4835]: I0216 16:02:09.189331 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br7jz" event={"ID":"2e29cfe7-9bea-4901-97a0-dd86d9a71835","Type":"ContainerDied","Data":"4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e"} Feb 16 16:02:10 crc kubenswrapper[4835]: I0216 16:02:10.198628 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br7jz" event={"ID":"2e29cfe7-9bea-4901-97a0-dd86d9a71835","Type":"ContainerStarted","Data":"804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad"} Feb 16 16:02:11 crc kubenswrapper[4835]: I0216 16:02:11.208974 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e29cfe7-9bea-4901-97a0-dd86d9a71835" containerID="804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad" exitCode=0 Feb 16 16:02:11 crc kubenswrapper[4835]: I0216 16:02:11.209078 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br7jz" 
event={"ID":"2e29cfe7-9bea-4901-97a0-dd86d9a71835","Type":"ContainerDied","Data":"804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad"}
Feb 16 16:02:12 crc kubenswrapper[4835]: I0216 16:02:12.223314 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br7jz" event={"ID":"2e29cfe7-9bea-4901-97a0-dd86d9a71835","Type":"ContainerStarted","Data":"7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978"}
Feb 16 16:02:12 crc kubenswrapper[4835]: I0216 16:02:12.256146 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-br7jz" podStartSLOduration=2.84305941 podStartE2EDuration="5.256104084s" podCreationTimestamp="2026-02-16 16:02:07 +0000 UTC" firstStartedPulling="2026-02-16 16:02:09.190989493 +0000 UTC m=+3278.482982398" lastFinishedPulling="2026-02-16 16:02:11.604034137 +0000 UTC m=+3280.896027072" observedRunningTime="2026-02-16 16:02:12.246170695 +0000 UTC m=+3281.538163630" watchObservedRunningTime="2026-02-16 16:02:12.256104084 +0000 UTC m=+3281.548097019"
Feb 16 16:02:17 crc kubenswrapper[4835]: I0216 16:02:17.592937 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-br7jz"
Feb 16 16:02:17 crc kubenswrapper[4835]: I0216 16:02:17.593318 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-br7jz"
Feb 16 16:02:18 crc kubenswrapper[4835]: I0216 16:02:18.651464 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-br7jz" podUID="2e29cfe7-9bea-4901-97a0-dd86d9a71835" containerName="registry-server" probeResult="failure" output=<
Feb 16 16:02:18 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s
Feb 16 16:02:18 crc kubenswrapper[4835]: >
Feb 16 16:02:20 crc kubenswrapper[4835]: E0216 16:02:20.381665 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 16:02:27 crc kubenswrapper[4835]: I0216 16:02:27.637748 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-br7jz"
Feb 16 16:02:27 crc kubenswrapper[4835]: I0216 16:02:27.690427 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-br7jz"
Feb 16 16:02:27 crc kubenswrapper[4835]: I0216 16:02:27.870088 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-br7jz"]
Feb 16 16:02:29 crc kubenswrapper[4835]: I0216 16:02:29.374368 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-br7jz" podUID="2e29cfe7-9bea-4901-97a0-dd86d9a71835" containerName="registry-server" containerID="cri-o://7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978" gracePeriod=2
Feb 16 16:02:29 crc kubenswrapper[4835]: I0216 16:02:29.899716 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-br7jz"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.070046 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-catalog-content\") pod \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") "
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.070120 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-utilities\") pod \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") "
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.070169 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxc6x\" (UniqueName: \"kubernetes.io/projected/2e29cfe7-9bea-4901-97a0-dd86d9a71835-kube-api-access-vxc6x\") pod \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\" (UID: \"2e29cfe7-9bea-4901-97a0-dd86d9a71835\") "
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.071840 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-utilities" (OuterVolumeSpecName: "utilities") pod "2e29cfe7-9bea-4901-97a0-dd86d9a71835" (UID: "2e29cfe7-9bea-4901-97a0-dd86d9a71835"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.077879 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e29cfe7-9bea-4901-97a0-dd86d9a71835-kube-api-access-vxc6x" (OuterVolumeSpecName: "kube-api-access-vxc6x") pod "2e29cfe7-9bea-4901-97a0-dd86d9a71835" (UID: "2e29cfe7-9bea-4901-97a0-dd86d9a71835"). InnerVolumeSpecName "kube-api-access-vxc6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.172561 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.172592 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxc6x\" (UniqueName: \"kubernetes.io/projected/2e29cfe7-9bea-4901-97a0-dd86d9a71835-kube-api-access-vxc6x\") on node \"crc\" DevicePath \"\""
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.202667 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e29cfe7-9bea-4901-97a0-dd86d9a71835" (UID: "2e29cfe7-9bea-4901-97a0-dd86d9a71835"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.274629 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e29cfe7-9bea-4901-97a0-dd86d9a71835-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.391045 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e29cfe7-9bea-4901-97a0-dd86d9a71835" containerID="7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978" exitCode=0
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.391079 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br7jz" event={"ID":"2e29cfe7-9bea-4901-97a0-dd86d9a71835","Type":"ContainerDied","Data":"7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978"}
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.391101 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br7jz" event={"ID":"2e29cfe7-9bea-4901-97a0-dd86d9a71835","Type":"ContainerDied","Data":"3d33ca9cc27fbd43cc896adda71e13eb5158e473f539b940d977da288299dc4d"}
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.391130 4835 scope.go:117] "RemoveContainer" containerID="7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.391156 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-br7jz"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.434778 4835 scope.go:117] "RemoveContainer" containerID="804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.451506 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-br7jz"]
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.466284 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-br7jz"]
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.467700 4835 scope.go:117] "RemoveContainer" containerID="4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.553370 4835 scope.go:117] "RemoveContainer" containerID="7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978"
Feb 16 16:02:30 crc kubenswrapper[4835]: E0216 16:02:30.554233 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978\": container with ID starting with 7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978 not found: ID does not exist" containerID="7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.554313 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978"} err="failed to get container status \"7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978\": rpc error: code = NotFound desc = could not find container \"7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978\": container with ID starting with 7aeecf7dec316acc79b298775a4007dbf8d3030e23ae4f674342f8f7db561978 not found: ID does not exist"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.554359 4835 scope.go:117] "RemoveContainer" containerID="804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad"
Feb 16 16:02:30 crc kubenswrapper[4835]: E0216 16:02:30.555043 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad\": container with ID starting with 804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad not found: ID does not exist" containerID="804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.555093 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad"} err="failed to get container status \"804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad\": rpc error: code = NotFound desc = could not find container \"804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad\": container with ID starting with 804ca64f209f5b41a411520ce3fb56abf752d250ff13892de38aebba26c59aad not found: ID does not exist"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.555115 4835 scope.go:117] "RemoveContainer" containerID="4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e"
Feb 16 16:02:30 crc kubenswrapper[4835]: E0216 16:02:30.555598 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e\": container with ID starting with 4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e not found: ID does not exist" containerID="4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e"
Feb 16 16:02:30 crc kubenswrapper[4835]: I0216 16:02:30.555623 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e"} err="failed to get container status \"4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e\": rpc error: code = NotFound desc = could not find container \"4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e\": container with ID starting with 4eb85a20f0e20b9303e0c427fe0b379492b67703de53acba424aa31b4d61b25e not found: ID does not exist"
Feb 16 16:02:31 crc kubenswrapper[4835]: I0216 16:02:31.391801 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e29cfe7-9bea-4901-97a0-dd86d9a71835" path="/var/lib/kubelet/pods/2e29cfe7-9bea-4901-97a0-dd86d9a71835/volumes"
Feb 16 16:02:34 crc kubenswrapper[4835]: E0216 16:02:34.380804 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 16:02:49 crc kubenswrapper[4835]: E0216 16:02:49.394578 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 16:03:02 crc kubenswrapper[4835]: E0216 16:03:02.383457 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 16:03:15 crc kubenswrapper[4835]: E0216 16:03:15.380958 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 16:03:28 crc kubenswrapper[4835]: E0216 16:03:28.380658 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 16:03:39 crc kubenswrapper[4835]: E0216 16:03:39.379985 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-sgzmb" podUID="3826ee8a-e4a8-422e-a9a1-a4d671f6d8e1"
Feb 16 16:03:48 crc kubenswrapper[4835]: I0216 16:03:48.586174 4835 patch_prober.go:28] interesting pod/machine-config-daemon-nd4kl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 16:03:48 crc kubenswrapper[4835]: I0216 16:03:48.586807 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nd4kl" podUID="d233f2c8-6963-48c1-889e-ef20f52ad5b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"